2025-03-27 00:00:09.138288 | Job console starting...
2025-03-27 00:00:09.150736 | Updating repositories
2025-03-27 00:00:09.275147 | Preparing job workspace
2025-03-27 00:00:10.902047 | Running Ansible setup...
2025-03-27 00:00:17.291225 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-03-27 00:00:18.081597 |
2025-03-27 00:00:18.081706 | PLAY [Base pre]
2025-03-27 00:00:18.113501 |
2025-03-27 00:00:18.113608 | TASK [Setup log path fact]
2025-03-27 00:00:18.135531 | orchestrator | ok
2025-03-27 00:00:18.163884 |
2025-03-27 00:00:18.164003 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-03-27 00:00:18.205555 | orchestrator | ok
2025-03-27 00:00:18.233168 |
2025-03-27 00:00:18.233578 | TASK [emit-job-header : Print job information]
2025-03-27 00:00:18.346716 | # Job Information
2025-03-27 00:00:18.346880 | Ansible Version: 2.15.3
2025-03-27 00:00:18.346913 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-03-27 00:00:18.346943 | Pipeline: periodic-midnight
2025-03-27 00:00:18.346964 | Executor: 7d211f194f6a
2025-03-27 00:00:18.346983 | Triggered by: https://github.com/osism/testbed
2025-03-27 00:00:18.347002 | Event ID: bfc31a129e22418ebf35e4a8061472cc
2025-03-27 00:00:18.371101 |
2025-03-27 00:00:18.371219 | LOOP [emit-job-header : Print node information]
2025-03-27 00:00:18.518832 | orchestrator | ok:
2025-03-27 00:00:18.519038 | orchestrator | # Node Information
2025-03-27 00:00:18.519078 | orchestrator | Inventory Hostname: orchestrator
2025-03-27 00:00:18.519098 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-03-27 00:00:18.519116 | orchestrator | Username: zuul-testbed04
2025-03-27 00:00:18.519132 | orchestrator | Distro: Debian 12.10
2025-03-27 00:00:18.519151 | orchestrator | Provider: static-testbed
2025-03-27 00:00:18.519168 | orchestrator | Label: testbed-orchestrator
2025-03-27 00:00:18.519184 | orchestrator | Product Name: OpenStack Nova
2025-03-27 00:00:18.519200 | orchestrator | Interface IP: 81.163.193.140
2025-03-27 00:00:18.544913 |
2025-03-27 00:00:18.545029 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-03-27 00:00:19.288571 | orchestrator -> localhost | changed
2025-03-27 00:00:19.301568 |
2025-03-27 00:00:19.301661 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-03-27 00:00:20.289874 | orchestrator -> localhost | changed
2025-03-27 00:00:20.311230 |
2025-03-27 00:00:20.311327 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-03-27 00:00:20.719746 | orchestrator -> localhost | ok
2025-03-27 00:00:20.726956 |
2025-03-27 00:00:20.727076 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-03-27 00:00:20.756807 | orchestrator | ok
2025-03-27 00:00:20.779144 | orchestrator | included: /var/lib/zuul/builds/9b1e4d12f4194a679b3d2d6e2f315612/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-03-27 00:00:20.797666 |
2025-03-27 00:00:20.797735 | TASK [add-build-sshkey : Create Temp SSH key]
2025-03-27 00:00:21.501944 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-03-27 00:00:21.502132 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/9b1e4d12f4194a679b3d2d6e2f315612/work/9b1e4d12f4194a679b3d2d6e2f315612_id_rsa
2025-03-27 00:00:21.502163 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/9b1e4d12f4194a679b3d2d6e2f315612/work/9b1e4d12f4194a679b3d2d6e2f315612_id_rsa.pub
2025-03-27 00:00:21.502183 | orchestrator -> localhost | The key fingerprint is:
2025-03-27 00:00:21.502202 | orchestrator -> localhost | SHA256:cMsVUYAUlVyJzgc9jA/9jWTykrdPr4XT0E7PgEtRs0o zuul-build-sshkey
2025-03-27 00:00:21.502220 | orchestrator -> localhost | The key's randomart image is:
2025-03-27 00:00:21.502236 | orchestrator -> localhost | +---[RSA 3072]----+
2025-03-27 00:00:21.502252 | orchestrator -> localhost | | .o==%o.o |
2025-03-27 00:00:21.502268 | orchestrator -> localhost | | . B.B.oo |
2025-03-27 00:00:21.502292 | orchestrator -> localhost | | . .o.+EO.o |
2025-03-27 00:00:21.502309 | orchestrator -> localhost | | + oo.=+=..|
2025-03-27 00:00:21.502325 | orchestrator -> localhost | | S .+oo.o|
2025-03-27 00:00:21.502341 | orchestrator -> localhost | | . ..Oo|
2025-03-27 00:00:21.502362 | orchestrator -> localhost | | . ooB|
2025-03-27 00:00:21.502379 | orchestrator -> localhost | | oo|
2025-03-27 00:00:21.502395 | orchestrator -> localhost | | .. |
2025-03-27 00:00:21.502412 | orchestrator -> localhost | +----[SHA256]-----+
2025-03-27 00:00:21.502456 | orchestrator -> localhost | ok: Runtime: 0:00:00.163596
2025-03-27 00:00:21.509686 |
2025-03-27 00:00:21.509774 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-03-27 00:00:21.559620 | orchestrator | ok
2025-03-27 00:00:21.570206 | orchestrator | included: /var/lib/zuul/builds/9b1e4d12f4194a679b3d2d6e2f315612/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-03-27 00:00:21.590924 |
2025-03-27 00:00:21.590999 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-03-27 00:00:21.635246 | orchestrator | skipping: Conditional result was False
2025-03-27 00:00:21.642657 |
2025-03-27 00:00:21.642735 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-03-27 00:00:22.307972 | orchestrator | changed
2025-03-27 00:00:22.314644 |
2025-03-27 00:00:22.314727 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-03-27 00:00:22.601298 | orchestrator | ok
2025-03-27 00:00:22.616713 |
2025-03-27 00:00:22.616806 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-03-27 00:00:23.159280 | orchestrator | ok
2025-03-27 00:00:23.165762 |
2025-03-27 00:00:23.165849 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-03-27 00:00:23.555994 | orchestrator | ok
2025-03-27 00:00:23.670707 |
2025-03-27 00:00:23.670869 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-03-27 00:00:23.720583 | orchestrator | skipping: Conditional result was False
2025-03-27 00:00:23.731963 |
2025-03-27 00:00:23.732103 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-03-27 00:00:24.343636 | orchestrator -> localhost | changed
2025-03-27 00:00:24.367374 |
2025-03-27 00:00:24.367501 | TASK [add-build-sshkey : Add back temp key]
2025-03-27 00:00:24.758919 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/9b1e4d12f4194a679b3d2d6e2f315612/work/9b1e4d12f4194a679b3d2d6e2f315612_id_rsa (zuul-build-sshkey)
2025-03-27 00:00:24.759352 | orchestrator -> localhost | ok: Runtime: 0:00:00.007197
2025-03-27 00:00:24.776111 |
2025-03-27 00:00:24.776210 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-03-27 00:00:25.147586 | orchestrator | ok
2025-03-27 00:00:25.155096 |
2025-03-27 00:00:25.155188 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-03-27 00:00:25.222800 | orchestrator | skipping: Conditional result was False
2025-03-27 00:00:25.246568 |
2025-03-27 00:00:25.246668 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-03-27 00:00:25.797113 | orchestrator | ok
2025-03-27 00:00:25.808899 |
2025-03-27 00:00:25.808982 | TASK [validate-host : Define zuul_info_dir fact]
2025-03-27 00:00:25.849884 | orchestrator | ok
2025-03-27 00:00:25.856254 |
2025-03-27 00:00:25.856332 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-03-27 00:00:26.189989 | orchestrator -> localhost | ok
2025-03-27 00:00:26.199441 |
2025-03-27 00:00:26.199518 | TASK [validate-host : Collect information about the host]
2025-03-27 00:00:27.369593 | orchestrator | ok
2025-03-27 00:00:27.383674 |
2025-03-27 00:00:27.383757 | TASK [validate-host : Sanitize hostname]
2025-03-27 00:00:27.445994 | orchestrator | ok
2025-03-27 00:00:27.453112 |
2025-03-27 00:00:27.453198 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-03-27 00:00:27.976615 | orchestrator -> localhost | changed
2025-03-27 00:00:27.984322 |
2025-03-27 00:00:27.984479 | TASK [validate-host : Collect information about zuul worker]
2025-03-27 00:00:28.486721 | orchestrator | ok
2025-03-27 00:00:28.500041 |
2025-03-27 00:00:28.500177 | TASK [validate-host : Write out all zuul information for each host]
2025-03-27 00:00:29.005110 | orchestrator -> localhost | changed
2025-03-27 00:00:29.019466 |
2025-03-27 00:00:29.019591 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-03-27 00:00:29.286618 | orchestrator | ok
2025-03-27 00:00:29.294919 |
2025-03-27 00:00:29.295021 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-03-27 00:00:52.348988 | orchestrator | changed:
2025-03-27 00:00:52.349282 | orchestrator | .d..t...... src/
2025-03-27 00:00:52.349335 | orchestrator | .d..t...... src/github.com/
2025-03-27 00:00:52.349372 | orchestrator | .d..t...... src/github.com/osism/
2025-03-27 00:00:52.349404 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-03-27 00:00:52.349433 | orchestrator | RedHat.yml
2025-03-27 00:00:52.368052 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-03-27 00:00:52.368084 | orchestrator | RedHat.yml
2025-03-27 00:00:52.368145 | orchestrator | = 2.2.0"...
2025-03-27 00:01:10.575391 | orchestrator | 00:01:10.574 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-03-27 00:01:10.631464 | orchestrator | 00:01:10.631 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-03-27 00:01:11.905040 | orchestrator | 00:01:11.904 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-03-27 00:01:12.806277 | orchestrator | 00:01:12.806 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-03-27 00:01:13.966405 | orchestrator | 00:01:13.966 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-03-27 00:01:14.894797 | orchestrator | 00:01:14.894 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-03-27 00:01:16.370496 | orchestrator | 00:01:16.370 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-03-27 00:01:17.728361 | orchestrator | 00:01:17.727 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-03-27 00:01:17.728449 | orchestrator | 00:01:17.728 STDOUT terraform: Providers are signed by their developers.
2025-03-27 00:01:17.728461 | orchestrator | 00:01:17.728 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-03-27 00:01:17.728492 | orchestrator | 00:01:17.728 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-03-27 00:01:17.729717 | orchestrator | 00:01:17.728 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-03-27 00:01:17.882792 | orchestrator | 00:01:17.728 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-03-27 00:01:17.882896 | orchestrator | 00:01:17.728 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-03-27 00:01:17.882915 | orchestrator | 00:01:17.728 STDOUT terraform: you run "tofu init" in the future.
2025-03-27 00:01:17.882931 | orchestrator | 00:01:17.728 STDOUT terraform: OpenTofu has been successfully initialized!
2025-03-27 00:01:17.882945 | orchestrator | 00:01:17.728 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-03-27 00:01:17.882959 | orchestrator | 00:01:17.729 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-03-27 00:01:17.882974 | orchestrator | 00:01:17.729 STDOUT terraform: should now work.
2025-03-27 00:01:17.882996 | orchestrator | 00:01:17.729 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-03-27 00:01:17.883019 | orchestrator | 00:01:17.729 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-03-27 00:01:17.883034 | orchestrator | 00:01:17.729 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-03-27 00:01:17.883083 | orchestrator | 00:01:17.882 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-03-27 00:01:18.058247 | orchestrator | 00:01:18.058 STDOUT terraform: Created and switched to workspace "ci"!
2025-03-27 00:01:18.058336 | orchestrator | 00:01:18.058 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-03-27 00:01:18.058362 | orchestrator | 00:01:18.058 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-03-27 00:01:18.257703 | orchestrator | 00:01:18.058 STDOUT terraform: for this configuration.
2025-03-27 00:01:18.257787 | orchestrator | 00:01:18.257 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
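
[Editor's note] The provider resolution logged above corresponds to a required_providers block roughly like the following. This is a sketch only, not the testbed repository's actual configuration; the clipped ">= 2.2.0" constraint earlier in the log is assumed to belong to hashicorp/local, since that part of the output was truncated.

    terraform {
      required_providers {
        local = {
          source  = "hashicorp/local"
          version = ">= 2.2.0"   # assumed owner of the clipped constraint; resolved to v2.5.2 in this run
        }
        null = {
          source = "hashicorp/null"   # no constraint logged; latest (v3.2.3) was selected
        }
        openstack = {
          source  = "terraform-provider-openstack/openstack"
          version = ">= 1.53.0"  # resolved to v3.0.0 in this run
        }
      }
    }

With these constraints, "tofu init" records the selected versions in .terraform.lock.hcl, exactly as the messages above describe.
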
2025-03-27 00:01:18.341812 | orchestrator | 00:01:18.341 STDOUT terraform: ci.auto.tfvars 2025-03-27 00:01:18.486670 | orchestrator | 00:01:18.486 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead. 2025-03-27 00:01:19.321195 | orchestrator | 00:01:19.320 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-03-27 00:01:19.829456 | orchestrator | 00:01:19.829 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-03-27 00:01:20.022164 | orchestrator | 00:01:20.022 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-03-27 00:01:20.022217 | orchestrator | 00:01:20.022 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-03-27 00:01:20.022276 | orchestrator | 00:01:20.022 STDOUT terraform:  + create 2025-03-27 00:01:20.022289 | orchestrator | 00:01:20.022 STDOUT terraform:  <= read (data resources) 2025-03-27 00:01:20.022304 | orchestrator | 00:01:20.022 STDOUT terraform: OpenTofu will perform the following actions: 2025-03-27 00:01:20.022312 | orchestrator | 00:01:20.022 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-03-27 00:01:20.022317 | orchestrator | 00:01:20.022 STDOUT terraform:  # (config refers to values not yet known) 2025-03-27 00:01:20.022324 | orchestrator | 00:01:20.022 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-03-27 00:01:20.022350 | orchestrator | 00:01:20.022 STDOUT terraform:  + checksum = (known after apply) 2025-03-27 00:01:20.022379 | orchestrator | 00:01:20.022 STDOUT terraform:  + created_at = (known after apply) 2025-03-27 00:01:20.022431 | orchestrator | 00:01:20.022 STDOUT terraform:  + file = (known after apply) 2025-03-27 00:01:20.022448 | orchestrator | 00:01:20.022 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.022473 | orchestrator | 00:01:20.022 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.022514 | orchestrator | 00:01:20.022 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-03-27 00:01:20.022530 | orchestrator | 00:01:20.022 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-03-27 00:01:20.022551 | orchestrator | 00:01:20.022 STDOUT terraform:  + most_recent = true 2025-03-27 00:01:20.022584 | orchestrator | 00:01:20.022 STDOUT terraform:  + name = (known after apply) 2025-03-27 00:01:20.022600 | orchestrator | 00:01:20.022 STDOUT terraform:  + protected = (known after apply) 2025-03-27 00:01:20.022630 | orchestrator | 00:01:20.022 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.022657 | orchestrator | 00:01:20.022 STDOUT terraform:  + schema = (known after apply) 2025-03-27 00:01:20.022697 | orchestrator | 00:01:20.022 STDOUT terraform:  + size_bytes = (known after apply) 2025-03-27 00:01:20.022710 | orchestrator | 00:01:20.022 STDOUT terraform:  + tags = (known after apply) 2025-03-27 00:01:20.022742 | orchestrator | 00:01:20.022 STDOUT terraform:  + updated_at = (known after apply) 2025-03-27 00:01:20.022749 | orchestrator | 00:01:20.022 STDOUT terraform:  } 2025-03-27 00:01:20.022802 | orchestrator | 00:01:20.022 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-03-27 00:01:20.022829 | orchestrator | 00:01:20.022 STDOUT terraform:  # (config refers to values 
not yet known) 2025-03-27 00:01:20.022864 | orchestrator | 00:01:20.022 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-03-27 00:01:20.022892 | orchestrator | 00:01:20.022 STDOUT terraform:  + checksum = (known after apply) 2025-03-27 00:01:20.022918 | orchestrator | 00:01:20.022 STDOUT terraform:  + created_at = (known after apply) 2025-03-27 00:01:20.022947 | orchestrator | 00:01:20.022 STDOUT terraform:  + file = (known after apply) 2025-03-27 00:01:20.022974 | orchestrator | 00:01:20.022 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.023008 | orchestrator | 00:01:20.022 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.023029 | orchestrator | 00:01:20.022 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-03-27 00:01:20.023056 | orchestrator | 00:01:20.023 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-03-27 00:01:20.023076 | orchestrator | 00:01:20.023 STDOUT terraform:  + most_recent = true 2025-03-27 00:01:20.023106 | orchestrator | 00:01:20.023 STDOUT terraform:  + name = (known after apply) 2025-03-27 00:01:20.023132 | orchestrator | 00:01:20.023 STDOUT terraform:  + protected = (known after apply) 2025-03-27 00:01:20.023159 | orchestrator | 00:01:20.023 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.023189 | orchestrator | 00:01:20.023 STDOUT terraform:  + schema = (known after apply) 2025-03-27 00:01:20.023213 | orchestrator | 00:01:20.023 STDOUT terraform:  + size_bytes = (known after apply) 2025-03-27 00:01:20.023240 | orchestrator | 00:01:20.023 STDOUT terraform:  + tags = (known after apply) 2025-03-27 00:01:20.023271 | orchestrator | 00:01:20.023 STDOUT terraform:  + updated_at = (known after apply) 2025-03-27 00:01:20.023300 | orchestrator | 00:01:20.023 STDOUT terraform:  } 2025-03-27 00:01:20.023307 | orchestrator | 00:01:20.023 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-03-27 00:01:20.023330 | orchestrator | 00:01:20.023 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-03-27 00:01:20.023367 | orchestrator | 00:01:20.023 STDOUT terraform:  + content = (known after apply) 2025-03-27 00:01:20.023400 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-03-27 00:01:20.023443 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-27 00:01:20.023479 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_md5 = (known after apply) 2025-03-27 00:01:20.023526 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-27 00:01:20.023546 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_sha256 = (known after apply) 2025-03-27 00:01:20.023580 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-27 00:01:20.023607 | orchestrator | 00:01:20.023 STDOUT terraform:  + directory_permission = "0777" 2025-03-27 00:01:20.023624 | orchestrator | 00:01:20.023 STDOUT terraform:  + file_permission = "0644" 2025-03-27 00:01:20.023658 | orchestrator | 00:01:20.023 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-03-27 00:01:20.023694 | orchestrator | 00:01:20.023 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.023701 | orchestrator | 00:01:20.023 STDOUT terraform:  } 2025-03-27 00:01:20.023729 | orchestrator | 00:01:20.023 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-03-27 00:01:20.023762 | orchestrator | 00:01:20.023 
STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-03-27 00:01:20.023789 | orchestrator | 00:01:20.023 STDOUT terraform:  + content = (known after apply) 2025-03-27 00:01:20.023822 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-03-27 00:01:20.023858 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-27 00:01:20.023887 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_md5 = (known after apply) 2025-03-27 00:01:20.023927 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-27 00:01:20.023956 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_sha256 = (known after apply) 2025-03-27 00:01:20.023989 | orchestrator | 00:01:20.023 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-27 00:01:20.024025 | orchestrator | 00:01:20.023 STDOUT terraform:  + directory_permission = "0777" 2025-03-27 00:01:20.024032 | orchestrator | 00:01:20.024 STDOUT terraform:  + file_permission = "0644" 2025-03-27 00:01:20.024063 | orchestrator | 00:01:20.024 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-03-27 00:01:20.024109 | orchestrator | 00:01:20.024 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.024130 | orchestrator | 00:01:20.024 STDOUT terraform:  } 2025-03-27 00:01:20.024136 | orchestrator | 00:01:20.024 STDOUT terraform:  # local_file.inventory will be created 2025-03-27 00:01:20.024154 | orchestrator | 00:01:20.024 STDOUT terraform:  + resource "local_file" "inventory" { 2025-03-27 00:01:20.024191 | orchestrator | 00:01:20.024 STDOUT terraform:  + content = (known after apply) 2025-03-27 00:01:20.024220 | orchestrator | 00:01:20.024 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-03-27 00:01:20.024261 | orchestrator | 00:01:20.024 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-27 00:01:20.024289 | orchestrator | 00:01:20.024 STDOUT terraform:  + content_md5 = (known after apply) 2025-03-27 00:01:20.024323 | orchestrator | 00:01:20.024 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-27 00:01:20.024357 | orchestrator | 00:01:20.024 STDOUT terraform:  + content_sha256 = (known after apply) 2025-03-27 00:01:20.024387 | orchestrator | 00:01:20.024 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-27 00:01:20.024435 | orchestrator | 00:01:20.024 STDOUT terraform:  + directory_permission = "0777" 2025-03-27 00:01:20.024444 | orchestrator | 00:01:20.024 STDOUT terraform:  + file_permission = "0644" 2025-03-27 00:01:20.024471 | orchestrator | 00:01:20.024 STDOUT terraform:  + filename = "inventory.ci" 2025-03-27 00:01:20.024505 | orchestrator | 00:01:20.024 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.024511 | orchestrator | 00:01:20.024 STDOUT terraform:  } 2025-03-27 00:01:20.024543 | orchestrator | 00:01:20.024 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-03-27 00:01:20.024571 | orchestrator | 00:01:20.024 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-03-27 00:01:20.024601 | orchestrator | 00:01:20.024 STDOUT terraform:  + content = (sensitive value) 2025-03-27 00:01:20.024635 | orchestrator | 00:01:20.024 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-03-27 00:01:20.024670 | orchestrator | 00:01:20.024 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-27 00:01:20.024704 | orchestrator | 00:01:20.024 STDOUT 
terraform:  + content_md5 = (known after apply) 2025-03-27 00:01:20.024737 | orchestrator | 00:01:20.024 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-27 00:01:20.024774 | orchestrator | 00:01:20.024 STDOUT terraform:  + content_sha256 = (known after apply) 2025-03-27 00:01:20.024801 | orchestrator | 00:01:20.024 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-27 00:01:20.024824 | orchestrator | 00:01:20.024 STDOUT terraform:  + directory_permission = "0700" 2025-03-27 00:01:20.024847 | orchestrator | 00:01:20.024 STDOUT terraform:  + file_permission = "0600" 2025-03-27 00:01:20.024876 | orchestrator | 00:01:20.024 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-03-27 00:01:20.024922 | orchestrator | 00:01:20.024 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.024951 | orchestrator | 00:01:20.024 STDOUT terraform:  } 2025-03-27 00:01:20.024958 | orchestrator | 00:01:20.024 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-03-27 00:01:20.024980 | orchestrator | 00:01:20.024 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-03-27 00:01:20.025001 | orchestrator | 00:01:20.024 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.025008 | orchestrator | 00:01:20.024 STDOUT terraform:  } 2025-03-27 00:01:20.025058 | orchestrator | 00:01:20.025 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-03-27 00:01:20.025112 | orchestrator | 00:01:20.025 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-03-27 00:01:20.025133 | orchestrator | 00:01:20.025 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.025153 | orchestrator | 00:01:20.025 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.025196 | orchestrator | 00:01:20.025 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.025213 | orchestrator | 00:01:20.025 STDOUT terraform:  + image_id = (known after apply) 2025-03-27 00:01:20.025242 | orchestrator | 00:01:20.025 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.025280 | orchestrator | 00:01:20.025 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-03-27 00:01:20.025310 | orchestrator | 00:01:20.025 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.025329 | orchestrator | 00:01:20.025 STDOUT terraform:  + size = 80 2025-03-27 00:01:20.025361 | orchestrator | 00:01:20.025 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.025451 | orchestrator | 00:01:20.025 STDOUT terraform:  } 2025-03-27 00:01:20.025460 | orchestrator | 00:01:20.025 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-03-27 00:01:20.025489 | orchestrator | 00:01:20.025 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-27 00:01:20.025529 | orchestrator | 00:01:20.025 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.025536 | orchestrator | 00:01:20.025 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.025566 | orchestrator | 00:01:20.025 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.025611 | orchestrator | 00:01:20.025 STDOUT terraform:  + image_id = (known after apply) 2025-03-27 00:01:20.025619 | orchestrator | 00:01:20.025 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.025659 | orchestrator | 00:01:20.025 STDOUT terraform:  + name = 
"testbed-volume-0-node-base" 2025-03-27 00:01:20.025693 | orchestrator | 00:01:20.025 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.025700 | orchestrator | 00:01:20.025 STDOUT terraform:  + size = 80 2025-03-27 00:01:20.025722 | orchestrator | 00:01:20.025 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.025729 | orchestrator | 00:01:20.025 STDOUT terraform:  } 2025-03-27 00:01:20.025776 | orchestrator | 00:01:20.025 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-03-27 00:01:20.025821 | orchestrator | 00:01:20.025 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-27 00:01:20.025856 | orchestrator | 00:01:20.025 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.025863 | orchestrator | 00:01:20.025 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.025895 | orchestrator | 00:01:20.025 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.025939 | orchestrator | 00:01:20.025 STDOUT terraform:  + image_id = (known after apply) 2025-03-27 00:01:20.025955 | orchestrator | 00:01:20.025 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.025995 | orchestrator | 00:01:20.025 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-03-27 00:01:20.026040 | orchestrator | 00:01:20.025 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.026055 | orchestrator | 00:01:20.026 STDOUT terraform:  + size = 80 2025-03-27 00:01:20.026077 | orchestrator | 00:01:20.026 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.026084 | orchestrator | 00:01:20.026 STDOUT terraform:  } 2025-03-27 00:01:20.026131 | orchestrator | 00:01:20.026 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-03-27 00:01:20.026174 | orchestrator | 00:01:20.026 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-27 00:01:20.026204 | orchestrator | 00:01:20.026 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.026224 | orchestrator | 00:01:20.026 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.026255 | orchestrator | 00:01:20.026 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.026285 | orchestrator | 00:01:20.026 STDOUT terraform:  + image_id = (known after apply) 2025-03-27 00:01:20.026314 | orchestrator | 00:01:20.026 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.026360 | orchestrator | 00:01:20.026 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-03-27 00:01:20.026383 | orchestrator | 00:01:20.026 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.026403 | orchestrator | 00:01:20.026 STDOUT terraform:  + size = 80 2025-03-27 00:01:20.026441 | orchestrator | 00:01:20.026 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.026456 | orchestrator | 00:01:20.026 STDOUT terraform:  } 2025-03-27 00:01:20.026495 | orchestrator | 00:01:20.026 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-03-27 00:01:20.026539 | orchestrator | 00:01:20.026 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-27 00:01:20.026567 | orchestrator | 00:01:20.026 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.026587 | orchestrator | 00:01:20.026 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 
00:01:20.026618 | orchestrator | 00:01:20.026 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.026648 | orchestrator | 00:01:20.026 STDOUT terraform:  + image_id = (known after apply) 2025-03-27 00:01:20.026678 | orchestrator | 00:01:20.026 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.026715 | orchestrator | 00:01:20.026 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-03-27 00:01:20.026744 | orchestrator | 00:01:20.026 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.026764 | orchestrator | 00:01:20.026 STDOUT terraform:  + size = 80 2025-03-27 00:01:20.026785 | orchestrator | 00:01:20.026 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.026791 | orchestrator | 00:01:20.026 STDOUT terraform:  } 2025-03-27 00:01:20.026840 | orchestrator | 00:01:20.026 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-03-27 00:01:20.026884 | orchestrator | 00:01:20.026 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-27 00:01:20.026914 | orchestrator | 00:01:20.026 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.026934 | orchestrator | 00:01:20.026 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.026964 | orchestrator | 00:01:20.026 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.026994 | orchestrator | 00:01:20.026 STDOUT terraform:  + image_id = (known after apply) 2025-03-27 00:01:20.027024 | orchestrator | 00:01:20.026 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.027061 | orchestrator | 00:01:20.027 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-03-27 00:01:20.027090 | orchestrator | 00:01:20.027 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.027112 | orchestrator | 00:01:20.027 STDOUT terraform:  + size = 80 2025-03-27 00:01:20.027132 | orchestrator | 00:01:20.027 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.027139 | orchestrator | 00:01:20.027 STDOUT terraform:  } 2025-03-27 00:01:20.027186 | orchestrator | 00:01:20.027 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-03-27 00:01:20.027230 | orchestrator | 00:01:20.027 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-27 00:01:20.027260 | orchestrator | 00:01:20.027 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.027280 | orchestrator | 00:01:20.027 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.027310 | orchestrator | 00:01:20.027 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.027339 | orchestrator | 00:01:20.027 STDOUT terraform:  + image_id = (known after apply) 2025-03-27 00:01:20.027368 | orchestrator | 00:01:20.027 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.027406 | orchestrator | 00:01:20.027 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-03-27 00:01:20.027443 | orchestrator | 00:01:20.027 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.027463 | orchestrator | 00:01:20.027 STDOUT terraform:  + size = 80 2025-03-27 00:01:20.027484 | orchestrator | 00:01:20.027 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.027490 | orchestrator | 00:01:20.027 STDOUT terraform:  } 2025-03-27 00:01:20.027535 | orchestrator | 00:01:20.027 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be 
created 2025-03-27 00:01:20.027577 | orchestrator | 00:01:20.027 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.027606 | orchestrator | 00:01:20.027 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.027626 | orchestrator | 00:01:20.027 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.027658 | orchestrator | 00:01:20.027 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.027686 | orchestrator | 00:01:20.027 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.027723 | orchestrator | 00:01:20.027 STDOUT terraform:  + name = "testbed-volume-0-node-0" 2025-03-27 00:01:20.027752 | orchestrator | 00:01:20.027 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.027772 | orchestrator | 00:01:20.027 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.027793 | orchestrator | 00:01:20.027 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.027800 | orchestrator | 00:01:20.027 STDOUT terraform:  } 2025-03-27 00:01:20.027845 | orchestrator | 00:01:20.027 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-03-27 00:01:20.027886 | orchestrator | 00:01:20.027 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.027916 | orchestrator | 00:01:20.027 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.027935 | orchestrator | 00:01:20.027 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.027965 | orchestrator | 00:01:20.027 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.027995 | orchestrator | 00:01:20.027 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.028031 | orchestrator | 00:01:20.027 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-03-27 00:01:20.028062 | orchestrator | 00:01:20.028 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.028082 | orchestrator | 00:01:20.028 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.028102 | orchestrator | 00:01:20.028 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.028108 | orchestrator | 00:01:20.028 STDOUT terraform:  } 2025-03-27 00:01:20.028153 | orchestrator | 00:01:20.028 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-03-27 00:01:20.028195 | orchestrator | 00:01:20.028 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.028224 | orchestrator | 00:01:20.028 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.028243 | orchestrator | 00:01:20.028 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.028274 | orchestrator | 00:01:20.028 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.028304 | orchestrator | 00:01:20.028 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.028340 | orchestrator | 00:01:20.028 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-03-27 00:01:20.028370 | orchestrator | 00:01:20.028 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.028389 | orchestrator | 00:01:20.028 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.028415 | orchestrator | 00:01:20.028 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.028422 | orchestrator | 00:01:20.028 STDOUT terraform:  } 2025-03-27 00:01:20.028468 | orchestrator | 00:01:20.028 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-03-27 00:01:20.028510 | orchestrator | 00:01:20.028 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.028540 | orchestrator | 00:01:20.028 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.028560 | orchestrator | 00:01:20.028 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.028589 | orchestrator | 00:01:20.028 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.028619 | orchestrator | 00:01:20.028 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.028674 | orchestrator | 00:01:20.028 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-03-27 00:01:20.028706 | orchestrator | 00:01:20.028 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.028725 | orchestrator | 00:01:20.028 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.028746 | orchestrator | 00:01:20.028 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.028752 | orchestrator | 00:01:20.028 STDOUT terraform:  } 2025-03-27 00:01:20.028797 | orchestrator | 00:01:20.028 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-03-27 00:01:20.028839 | orchestrator | 00:01:20.028 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.028869 | orchestrator | 00:01:20.028 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.028889 | orchestrator | 00:01:20.028 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.028919 | orchestrator | 00:01:20.028 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.028949 | orchestrator | 00:01:20.028 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.028985 | orchestrator | 00:01:20.028 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-03-27 00:01:20.029014 | orchestrator | 00:01:20.028 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.029034 | orchestrator | 00:01:20.029 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.029054 | orchestrator | 00:01:20.029 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.029061 | orchestrator | 00:01:20.029 STDOUT terraform:  } 2025-03-27 00:01:20.029106 | orchestrator | 00:01:20.029 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-03-27 00:01:20.029148 | orchestrator | 00:01:20.029 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.029177 | orchestrator | 00:01:20.029 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.029197 | orchestrator | 00:01:20.029 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.029229 | orchestrator | 00:01:20.029 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.029262 | orchestrator | 00:01:20.029 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.029296 | orchestrator | 00:01:20.029 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-03-27 00:01:20.029326 | orchestrator | 00:01:20.029 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.029347 | orchestrator | 00:01:20.029 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.029367 | orchestrator | 00:01:20.029 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.029374 | orchestrator | 00:01:20.029 STDOUT terraform:  } 2025-03-27 00:01:20.029430 | orchestrator | 00:01:20.029 
STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-03-27 00:01:20.029465 | orchestrator | 00:01:20.029 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.029494 | orchestrator | 00:01:20.029 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.029507 | orchestrator | 00:01:20.029 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.029539 | orchestrator | 00:01:20.029 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.029569 | orchestrator | 00:01:20.029 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.029605 | orchestrator | 00:01:20.029 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-03-27 00:01:20.029635 | orchestrator | 00:01:20.029 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.029654 | orchestrator | 00:01:20.029 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.029676 | orchestrator | 00:01:20.029 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.029683 | orchestrator | 00:01:20.029 STDOUT terraform:  } 2025-03-27 00:01:20.029727 | orchestrator | 00:01:20.029 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-03-27 00:01:20.029768 | orchestrator | 00:01:20.029 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.029797 | orchestrator | 00:01:20.029 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.029818 | orchestrator | 00:01:20.029 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.029848 | orchestrator | 00:01:20.029 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.029877 | orchestrator | 00:01:20.029 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.029913 | orchestrator | 00:01:20.029 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-03-27 00:01:20.029943 | orchestrator | 00:01:20.029 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.029959 | orchestrator | 00:01:20.029 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.029980 | orchestrator | 00:01:20.029 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.029986 | orchestrator | 00:01:20.029 STDOUT terraform:  } 2025-03-27 00:01:20.030045 | orchestrator | 00:01:20.029 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-03-27 00:01:20.030086 | orchestrator | 00:01:20.030 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.030115 | orchestrator | 00:01:20.030 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.030135 | orchestrator | 00:01:20.030 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.030165 | orchestrator | 00:01:20.030 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.030195 | orchestrator | 00:01:20.030 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.030231 | orchestrator | 00:01:20.030 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-03-27 00:01:20.030262 | orchestrator | 00:01:20.030 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.030283 | orchestrator | 00:01:20.030 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.030303 | orchestrator | 00:01:20.030 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.030310 | orchestrator | 00:01:20.030 STDOUT terraform:  } 2025-03-27 00:01:20.030356 | orchestrator 
| 00:01:20.030 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-03-27 00:01:20.030397 | orchestrator | 00:01:20.030 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.030441 | orchestrator | 00:01:20.030 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.030461 | orchestrator | 00:01:20.030 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.030491 | orchestrator | 00:01:20.030 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.030521 | orchestrator | 00:01:20.030 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.030558 | orchestrator | 00:01:20.030 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-03-27 00:01:20.030588 | orchestrator | 00:01:20.030 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.030607 | orchestrator | 00:01:20.030 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.030628 | orchestrator | 00:01:20.030 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.030634 | orchestrator | 00:01:20.030 STDOUT terraform:  } 2025-03-27 00:01:20.030687 | orchestrator | 00:01:20.030 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-03-27 00:01:20.030729 | orchestrator | 00:01:20.030 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.030758 | orchestrator | 00:01:20.030 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.030778 | orchestrator | 00:01:20.030 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.030808 | orchestrator | 00:01:20.030 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.030840 | orchestrator | 00:01:20.030 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.030875 | orchestrator | 00:01:20.030 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-03-27 00:01:20.030904 | orchestrator | 00:01:20.030 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.030924 | orchestrator | 00:01:20.030 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.030944 | orchestrator | 00:01:20.030 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.030951 | orchestrator | 00:01:20.030 STDOUT terraform:  } 2025-03-27 00:01:20.030996 | orchestrator | 00:01:20.030 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-03-27 00:01:20.031038 | orchestrator | 00:01:20.030 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.031067 | orchestrator | 00:01:20.031 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.031087 | orchestrator | 00:01:20.031 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.031118 | orchestrator | 00:01:20.031 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.031148 | orchestrator | 00:01:20.031 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.031184 | orchestrator | 00:01:20.031 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-03-27 00:01:20.031214 | orchestrator | 00:01:20.031 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.031233 | orchestrator | 00:01:20.031 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.031253 | orchestrator | 00:01:20.031 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.031260 | orchestrator | 00:01:20.031 STDOUT terraform:  } 2025-03-27 
00:01:20.031305 | orchestrator | 00:01:20.031 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-03-27 00:01:20.031347 | orchestrator | 00:01:20.031 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.031376 | orchestrator | 00:01:20.031 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.031396 | orchestrator | 00:01:20.031 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.031433 | orchestrator | 00:01:20.031 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.031462 | orchestrator | 00:01:20.031 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.031499 | orchestrator | 00:01:20.031 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-03-27 00:01:20.031529 | orchestrator | 00:01:20.031 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.031550 | orchestrator | 00:01:20.031 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.031570 | orchestrator | 00:01:20.031 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.031576 | orchestrator | 00:01:20.031 STDOUT terraform:  } 2025-03-27 00:01:20.031622 | orchestrator | 00:01:20.031 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-03-27 00:01:20.031664 | orchestrator | 00:01:20.031 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.031693 | orchestrator | 00:01:20.031 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.031713 | orchestrator | 00:01:20.031 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.031743 | orchestrator | 00:01:20.031 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.031772 | orchestrator | 00:01:20.031 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.031809 | orchestrator | 00:01:20.031 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-03-27 00:01:20.031838 | orchestrator | 00:01:20.031 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.031873 | orchestrator | 00:01:20.031 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.031888 | orchestrator | 00:01:20.031 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.031894 | orchestrator | 00:01:20.031 STDOUT terraform:  } 2025-03-27 00:01:20.031930 | orchestrator | 00:01:20.031 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-03-27 00:01:20.031972 | orchestrator | 00:01:20.031 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.032001 | orchestrator | 00:01:20.031 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.032021 | orchestrator | 00:01:20.031 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.032051 | orchestrator | 00:01:20.032 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.032080 | orchestrator | 00:01:20.032 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.032116 | orchestrator | 00:01:20.032 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-03-27 00:01:20.032146 | orchestrator | 00:01:20.032 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.032166 | orchestrator | 00:01:20.032 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.032186 | orchestrator | 00:01:20.032 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.032193 | orchestrator | 00:01:20.032 STDOUT 
terraform:  } 2025-03-27 00:01:20.032239 | orchestrator | 00:01:20.032 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-03-27 00:01:20.032281 | orchestrator | 00:01:20.032 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.032310 | orchestrator | 00:01:20.032 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.032326 | orchestrator | 00:01:20.032 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.032357 | orchestrator | 00:01:20.032 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.032388 | orchestrator | 00:01:20.032 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.032429 | orchestrator | 00:01:20.032 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-03-27 00:01:20.032459 | orchestrator | 00:01:20.032 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.032479 | orchestrator | 00:01:20.032 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.032499 | orchestrator | 00:01:20.032 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.032505 | orchestrator | 00:01:20.032 STDOUT terraform:  } 2025-03-27 00:01:20.032553 | orchestrator | 00:01:20.032 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-03-27 00:01:20.032594 | orchestrator | 00:01:20.032 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.032624 | orchestrator | 00:01:20.032 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.032643 | orchestrator | 00:01:20.032 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.032673 | orchestrator | 00:01:20.032 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.032703 | orchestrator | 00:01:20.032 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.032740 | orchestrator | 00:01:20.032 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-03-27 00:01:20.032769 | orchestrator | 00:01:20.032 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.032785 | orchestrator | 00:01:20.032 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.032806 | orchestrator | 00:01:20.032 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.032813 | orchestrator | 00:01:20.032 STDOUT terraform:  } 2025-03-27 00:01:20.032859 | orchestrator | 00:01:20.032 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-03-27 00:01:20.032900 | orchestrator | 00:01:20.032 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-27 00:01:20.032930 | orchestrator | 00:01:20.032 STDOUT terraform:  + attachment = (known after apply) 2025-03-27 00:01:20.032950 | orchestrator | 00:01:20.032 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.032979 | orchestrator | 00:01:20.032 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.033009 | orchestrator | 00:01:20.032 STDOUT terraform:  + metadata = (known after apply) 2025-03-27 00:01:20.033046 | orchestrator | 00:01:20.033 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-03-27 00:01:20.033076 | orchestrator | 00:01:20.033 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.033096 | orchestrator | 00:01:20.033 STDOUT terraform:  + size = 20 2025-03-27 00:01:20.033115 | orchestrator | 00:01:20.033 STDOUT terraform:  + volume_type = "ssd" 2025-03-27 00:01:20.033122 | orchestrator 
| 00:01:20.033 STDOUT terraform:  } 2025-03-27 00:01:20.033167 | orchestrator | 00:01:20.033 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-03-27 00:01:20.033209 | orchestrator | 00:01:20.033 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-03-27 00:01:20.033245 | orchestrator | 00:01:20.033 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-03-27 00:01:20.033279 | orchestrator | 00:01:20.033 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-03-27 00:01:20.033313 | orchestrator | 00:01:20.033 STDOUT terraform:  + all_metadata = (known after apply) 2025-03-27 00:01:20.033346 | orchestrator | 00:01:20.033 STDOUT terraform:  + all_tags = (known after apply) 2025-03-27 00:01:20.033369 | orchestrator | 00:01:20.033 STDOUT terraform:  + availability_zone = "nova" 2025-03-27 00:01:20.033390 | orchestrator | 00:01:20.033 STDOUT terraform:  + config_drive = true 2025-03-27 00:01:20.033550 | orchestrator | 00:01:20.033 STDOUT terraform:  + created = (known after apply) 2025-03-27 00:01:20.033623 | orchestrator | 00:01:20.033 STDOUT terraform:  + flavor_id = (known after apply) 2025-03-27 00:01:20.033639 | orchestrator | 00:01:20.033 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-03-27 00:01:20.033652 | orchestrator | 00:01:20.033 STDOUT terraform:  + force_delete = false 2025-03-27 00:01:20.033669 | orchestrator | 00:01:20.033 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.033709 | orchestrator | 00:01:20.033 STDOUT terraform:  + image_id = (known after apply) 2025-03-27 00:01:20.033723 | orchestrator | 00:01:20.033 STDOUT terraform:  + image_name = (known after apply) 2025-03-27 00:01:20.033736 | orchestrator | 00:01:20.033 STDOUT terraform:  + key_pair = "testbed" 2025-03-27 00:01:20.033748 | orchestrator | 00:01:20.033 STDOUT terraform:  + name = "testbed-manager" 2025-03-27 00:01:20.033760 | orchestrator | 00:01:20.033 STDOUT terraform:  + power_state = "active" 2025-03-27 00:01:20.033776 | orchestrator | 00:01:20.033 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.033824 | orchestrator | 00:01:20.033 STDOUT terraform:  + security_groups = (known after apply) 2025-03-27 00:01:20.033855 | orchestrator | 00:01:20.033 STDOUT terraform:  + stop_before_destroy = false 2025-03-27 00:01:20.033868 | orchestrator | 00:01:20.033 STDOUT terraform:  + updated = (known after apply) 2025-03-27 00:01:20.033890 | orchestrator | 00:01:20.033 STDOUT terraform:  + user_data = (known after apply) 2025-03-27 00:01:20.033904 | orchestrator | 00:01:20.033 STDOUT terraform:  + block_device { 2025-03-27 00:01:20.033917 | orchestrator | 00:01:20.033 STDOUT terraform:  + boot_index = 0 2025-03-27 00:01:20.033929 | orchestrator | 00:01:20.033 STDOUT terraform:  + delete_on_termination = false 2025-03-27 00:01:20.033945 | orchestrator | 00:01:20.033 STDOUT terraform:  + destination_type = "volume" 2025-03-27 00:01:20.033958 | orchestrator | 00:01:20.033 STDOUT terraform:  + multiattach = false 2025-03-27 00:01:20.033973 | orchestrator | 00:01:20.033 STDOUT terraform:  + source_type = "volume" 2025-03-27 00:01:20.033989 | orchestrator | 00:01:20.033 STDOUT terraform:  + uuid = (known after apply) 2025-03-27 00:01:20.034010 | orchestrator | 00:01:20.033 STDOUT terraform:  } 2025-03-27 00:01:20.036697 | orchestrator | 00:01:20.033 STDOUT terraform:  + network { 2025-03-27 00:01:20.036722 | orchestrator | 00:01:20.034 STDOUT terraform:  + access_network = false 2025-03-27 
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + all_metadata = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + availability_zone = "nova"
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + config_drive = true
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + created = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + flavor_id = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + force_delete = false
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + image_id = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + image_name = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + key_pair = "testbed"
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + name = "testbed-node-0"
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + power_state = "active"
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + security_groups = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + stop_before_destroy = false
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + updated = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + block_device {
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + boot_index = 0
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + delete_on_termination = false
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + destination_type = "volume"
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + multiattach = false
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + source_type = "volume"
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  }
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + network {
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + access_network = false
2025-03-27 00:01:20.037 | orchestrator | 00:01:20.037 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + mac = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + name = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + port = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  }
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  }
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + all_metadata = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + availability_zone = "nova"
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + config_drive = true
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + created = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + flavor_id = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + force_delete = false
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + image_id = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + image_name = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + key_pair = "testbed"
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + name = "testbed-node-1"
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + power_state = "active"
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + security_groups = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + stop_before_destroy = false
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + updated = (known after apply)
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + block_device {
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + boot_index = 0
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + delete_on_termination = false
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + destination_type = "volume"
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + multiattach = false
2025-03-27 00:01:20.038 | orchestrator | 00:01:20.038 STDOUT terraform:  + source_type = "volume"
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  }
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + network {
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + access_network = false
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + mac = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + name = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + port = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  }
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  }
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + all_metadata = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + availability_zone = "nova"
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + config_drive = true
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + created = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + flavor_id = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + force_delete = false
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + image_id = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + image_name = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + key_pair = "testbed"
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + name = "testbed-node-2"
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + power_state = "active"
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + security_groups = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + stop_before_destroy = false
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + updated = (known after apply)
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + block_device {
2025-03-27 00:01:20.039 | orchestrator | 00:01:20.039 STDOUT terraform:  + boot_index = 0
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + delete_on_termination = false
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + destination_type = "volume"
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + multiattach = false
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + source_type = "volume"
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  }
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + network {
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + access_network = false
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + mac = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + name = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + port = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  }
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  }
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + all_metadata = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + availability_zone = "nova"
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + config_drive = true
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + created = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + flavor_id = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + force_delete = false
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + image_id = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + image_name = (known after apply)
2025-03-27 00:01:20.040 | orchestrator | 00:01:20.040 STDOUT terraform:  + key_pair = "testbed"
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + name = "testbed-node-3"
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + power_state = "active"
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + security_groups = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + stop_before_destroy = false
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + updated = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + block_device {
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + boot_index = 0
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + delete_on_termination = false
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + destination_type = "volume"
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + multiattach = false
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + source_type = "volume"
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  }
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + network {
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + access_network = false
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + mac = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + name = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + port = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  }
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  }
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + all_metadata = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + availability_zone = "nova"
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + config_drive = true
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + created = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + flavor_id = (known after apply)
2025-03-27 00:01:20.041 | orchestrator | 00:01:20.041 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + force_delete = false
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + image_id = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + image_name = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + key_pair = "testbed"
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + name = "testbed-node-4"
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + power_state = "active"
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + security_groups = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + stop_before_destroy = false
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + updated = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + block_device {
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + boot_index = 0
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + delete_on_termination = false
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + destination_type = "volume"
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + multiattach = false
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + source_type = "volume"
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  }
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + network {
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + access_network = false
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + mac = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + name = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + port = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  }
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  }
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + all_metadata = (known after apply)
2025-03-27 00:01:20.042 | orchestrator | 00:01:20.042 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + availability_zone = "nova"
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + config_drive = true
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + created = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + flavor_id = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + force_delete = false
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + image_id = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + image_name = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + key_pair = "testbed"
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + name = "testbed-node-5"
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + power_state = "active"
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + security_groups = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + stop_before_destroy = false
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + updated = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + block_device {
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + boot_index = 0
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + delete_on_termination = false
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + destination_type = "volume"
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + multiattach = false
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + source_type = "volume"
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  }
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + network {
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + access_network = false
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + mac = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + name = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + port = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  + uuid = (known after apply)
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  }
2025-03-27 00:01:20.043 | orchestrator | 00:01:20.043 STDOUT terraform:  }
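testbed-node-0 through testbed-node-5 above are identical apart from their names, and they share the same user_data (the plan prints only a content hash, not the payload). A count-driven Terraform sketch of such a resource is shown below; the cloud-init file name, the boot-volume series and the port references are assumptions, since only the plan output is visible in this log.

# Sketch only: six identical nodes driven by count.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"
  user_data         = file("${path.module}/node-cloud-init.yml") # assumed file; the plan only shows a hash

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id # assumed boot-volume series, not visible here
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}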
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" {
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + fingerprint = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + name = "testbed"
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + private_key = (sensitive value)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + public_key = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + user_id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  }
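The shared "testbed" keypair is generated by the provider itself, which is why private_key shows up as a sensitive value in the plan. A sketch:

# Sketch only: with no public_key given, the provider generates the pair,
# which is why private_key appears as a sensitive value in the plan above.
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
}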
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  }
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  }
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  }
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.044 | orchestrator | 00:01:20.044 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  }
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  }
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  }
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  }
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.045 | orchestrator | 00:01:20.045 STDOUT terraform:  }
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  }
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  }
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  }
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.046 | orchestrator | 00:01:20.046 STDOUT terraform:  }
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  }
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  }
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  }
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.047 | orchestrator | 00:01:20.047 STDOUT terraform:  }
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  }
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + device = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + instance_id = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + volume_id = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  }
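All eighteen attachment resources above are identical in the plan because instance and volume IDs only exist after apply. Judging by the volume names (for example testbed-volume-15-node-3), volume N is attached to node N % 6; a sketch under that assumption:

# Sketch only: volume N goes to node N % 6 (mapping inferred from the volume names).
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 18
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}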
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + fixed_ip = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + floating_ip = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + port_id = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  }
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + address = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + dns_domain = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + dns_name = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + fixed_ip = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + pool = "public"
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + port_id = (known after apply)
2025-03-27 00:01:20.048 | orchestrator | 00:01:20.048 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + subnet_id = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + tenant_id = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  }
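The manager gets a floating IP from the "public" pool, and the association is kept as a separate resource so it can reference both the floating IP and the management port once they exist. A sketch:

# Sketch only: floating IP from the "public" pool, bound to the manager port.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}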
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" {
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + admin_state_up = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + availability_zone_hints = [
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + "nova",
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  ]
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + dns_domain = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + external = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + mtu = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + name = "net-testbed-management"
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + port_security_enabled = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + qos_policy_id = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + shared = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + tenant_id = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + transparent_vlan = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + segments (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  }
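net-testbed-management is the management network for the whole testbed. The corresponding subnet is not visible in this excerpt; the sketch below assumes one with a 192.168.16.0/20 CIDR, inferred from the fixed IPs and allowed address pairs used further down.

# Sketch only: management network plus an assumed subnet (not in this excerpt).
resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name       = "subnet-testbed-management"                       # assumed name
  network_id = openstack_networking_network_v2.net_management.id
  cidr       = "192.168.16.0/20"                                 # inferred from the addresses below
}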
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" {
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + admin_state_up = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + all_fixed_ips = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + all_security_group_ids = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + device_id = (known after apply)
2025-03-27 00:01:20.049 | orchestrator | 00:01:20.049 STDOUT terraform:  + device_owner = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + dns_assignment = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + dns_name = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + mac_address = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + network_id = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + port_security_enabled = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + qos_policy_id = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + security_group_ids = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + tenant_id = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + ip_address = "192.168.112.0/20"
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  }
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + ip_address = "192.168.16.8/20"
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  }
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + binding (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + fixed_ip {
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + ip_address = "192.168.16.5"
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  + subnet_id = (known after apply)
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  }
2025-03-27 00:01:20.050 | orchestrator | 00:01:20.050 STDOUT terraform:  }
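The management ports pin fixed IPs (192.168.16.5 for the manager, 192.168.16.10 and up for the nodes below) and whitelist additional ranges such as 192.168.112.0/20 and 192.168.16.254/20 via allowed_address_pairs, presumably so that shared or virtual addresses behind the ports are not dropped by port security. A count-based sketch for the node ports follows; the subnet reference and the IP arithmetic are assumptions, and only two of the four address pairs from the plan are repeated.

# Sketch only: per-node ports with pinned IPs and extra allowed address pairs.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id # assumed subnet resource
    ip_address = "192.168.16.${10 + count.index}"
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }

  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
}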
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" {
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + admin_state_up = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + all_fixed_ips = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + all_security_group_ids = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + device_id = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + device_owner = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + dns_assignment = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + dns_name = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + mac_address = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + network_id = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + port_security_enabled = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + qos_policy_id = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + security_group_ids = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + tenant_id = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + ip_address = "192.168.112.0/20"
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  }
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + ip_address = "192.168.16.254/20"
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  }
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + ip_address = "192.168.16.8/20"
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  }
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + ip_address = "192.168.16.9/20"
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  }
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + binding (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + fixed_ip {
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + ip_address = "192.168.16.10"
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  + subnet_id = (known after apply)
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  }
2025-03-27 00:01:20.051 | orchestrator | 00:01:20.051 STDOUT terraform:  }
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" {
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + admin_state_up = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + all_fixed_ips = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + all_security_group_ids = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + device_id = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + device_owner = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + dns_assignment = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + dns_name = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + mac_address = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + network_id = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + port_security_enabled = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + qos_policy_id = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + security_group_ids = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + tenant_id = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + ip_address = "192.168.112.0/20"
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  }
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + ip_address = "192.168.16.254/20"
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  }
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + ip_address = "192.168.16.8/20"
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  }
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + ip_address = "192.168.16.9/20"
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  }
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + binding (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + fixed_ip {
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + ip_address = "192.168.16.11"
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  + subnet_id = (known after apply)
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  }
2025-03-27 00:01:20.052 | orchestrator | 00:01:20.052 STDOUT terraform:  }
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" {
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + admin_state_up = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + all_fixed_ips = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + all_security_group_ids = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + device_id = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + device_owner = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + dns_assignment = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + dns_name = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + mac_address = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + network_id = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + port_security_enabled = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + qos_policy_id = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + security_group_ids = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + tenant_id = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + ip_address = "192.168.112.0/20"
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  }
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + ip_address = "192.168.16.254/20"
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  }
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + ip_address = "192.168.16.8/20"
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  }
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + ip_address = "192.168.16.9/20"
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  }
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + binding (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + fixed_ip {
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + ip_address = "192.168.16.12"
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  + subnet_id = (known after apply)
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  }
2025-03-27 00:01:20.053 | orchestrator | 00:01:20.053 STDOUT terraform:  }
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" {
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + admin_state_up = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + all_fixed_ips = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + all_security_group_ids = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + all_tags = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + device_id = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + device_owner = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + dns_assignment = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + dns_name = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + id = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + mac_address = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + network_id = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + port_security_enabled = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + qos_policy_id = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + region = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + security_group_ids = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + tenant_id = (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + ip_address = "192.168.112.0/20"
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  }
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + ip_address = "192.168.16.254/20"
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  }
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + ip_address = "192.168.16.8/20"
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  }
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + allowed_address_pairs {
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + ip_address = "192.168.16.9/20"
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  }
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + binding (known after apply)
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + fixed_ip {
2025-03-27 00:01:20.054 | orchestrator | 00:01:20.054 STDOUT terraform:  + ip_address = "192.168.16.13"
2025-03-27 00:01:20.054544 | orchestrator | 00:01:20.054 STDOUT terraform:  
+ subnet_id = (known after apply) 2025-03-27 00:01:20.054558 | orchestrator | 00:01:20.054 STDOUT terraform:  } 2025-03-27 00:01:20.054564 | orchestrator | 00:01:20.054 STDOUT terraform:  } 2025-03-27 00:01:20.054611 | orchestrator | 00:01:20.054 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-03-27 00:01:20.054656 | orchestrator | 00:01:20.054 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-27 00:01:20.054691 | orchestrator | 00:01:20.054 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-27 00:01:20.054726 | orchestrator | 00:01:20.054 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-27 00:01:20.054760 | orchestrator | 00:01:20.054 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-27 00:01:20.054796 | orchestrator | 00:01:20.054 STDOUT terraform:  + all_tags = (known after apply) 2025-03-27 00:01:20.054831 | orchestrator | 00:01:20.054 STDOUT terraform:  + device_id = (known after apply) 2025-03-27 00:01:20.054866 | orchestrator | 00:01:20.054 STDOUT terraform:  + device_owner = (known after apply) 2025-03-27 00:01:20.054900 | orchestrator | 00:01:20.054 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-27 00:01:20.054938 | orchestrator | 00:01:20.054 STDOUT terraform:  + dns_name = (known after apply) 2025-03-27 00:01:20.054971 | orchestrator | 00:01:20.054 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.055006 | orchestrator | 00:01:20.054 STDOUT terraform:  + mac_address = (known after apply) 2025-03-27 00:01:20.055042 | orchestrator | 00:01:20.055 STDOUT terraform:  + network_id = (known after apply) 2025-03-27 00:01:20.055077 | orchestrator | 00:01:20.055 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-27 00:01:20.055111 | orchestrator | 00:01:20.055 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-27 00:01:20.055146 | orchestrator | 00:01:20.055 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.055181 | orchestrator | 00:01:20.055 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-27 00:01:20.055216 | orchestrator | 00:01:20.055 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.055235 | orchestrator | 00:01:20.055 STDOUT terraform:  + allowed_address_pairs { 2025-03-27 00:01:20.055263 | orchestrator | 00:01:20.055 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-27 00:01:20.055272 | orchestrator | 00:01:20.055 STDOUT terraform:  } 2025-03-27 00:01:20.055294 | orchestrator | 00:01:20.055 STDOUT terraform:  + allowed_address_pairs { 2025-03-27 00:01:20.055320 | orchestrator | 00:01:20.055 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-27 00:01:20.055327 | orchestrator | 00:01:20.055 STDOUT terraform:  } 2025-03-27 00:01:20.055348 | orchestrator | 00:01:20.055 STDOUT terraform:  + allowed_address_pairs { 2025-03-27 00:01:20.055375 | orchestrator | 00:01:20.055 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-27 00:01:20.055382 | orchestrator | 00:01:20.055 STDOUT terraform:  } 2025-03-27 00:01:20.055404 | orchestrator | 00:01:20.055 STDOUT terraform:  + allowed_address_pairs { 2025-03-27 00:01:20.055437 | orchestrator | 00:01:20.055 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-27 00:01:20.055444 | orchestrator | 00:01:20.055 STDOUT terraform:  } 2025-03-27 00:01:20.055470 | orchestrator | 00:01:20.055 STDOUT terraform:  + binding (known after apply) 
2025-03-27 00:01:20.055486 | orchestrator | 00:01:20.055 STDOUT terraform:  + fixed_ip { 2025-03-27 00:01:20.055508 | orchestrator | 00:01:20.055 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-03-27 00:01:20.055537 | orchestrator | 00:01:20.055 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-27 00:01:20.055544 | orchestrator | 00:01:20.055 STDOUT terraform:  } 2025-03-27 00:01:20.055559 | orchestrator | 00:01:20.055 STDOUT terraform:  } 2025-03-27 00:01:20.055604 | orchestrator | 00:01:20.055 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-03-27 00:01:20.055648 | orchestrator | 00:01:20.055 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-27 00:01:20.055684 | orchestrator | 00:01:20.055 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-27 00:01:20.055719 | orchestrator | 00:01:20.055 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-27 00:01:20.055753 | orchestrator | 00:01:20.055 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-27 00:01:20.055789 | orchestrator | 00:01:20.055 STDOUT terraform:  + all_tags = (known after apply) 2025-03-27 00:01:20.055824 | orchestrator | 00:01:20.055 STDOUT terraform:  + device_id = (known after apply) 2025-03-27 00:01:20.055859 | orchestrator | 00:01:20.055 STDOUT terraform:  + device_owner = (known after apply) 2025-03-27 00:01:20.055893 | orchestrator | 00:01:20.055 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-27 00:01:20.056349 | orchestrator | 00:01:20.055 STDOUT terraform:  + dns_name = (known after apply) 2025-03-27 00:01:20.056384 | orchestrator | 00:01:20.056 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.056444 | orchestrator | 00:01:20.056 STDOUT terraform:  + mac_address = (known after apply) 2025-03-27 00:01:20.056475 | orchestrator | 00:01:20.056 STDOUT terraform:  + network_id = (known after apply) 2025-03-27 00:01:20.056510 | orchestrator | 00:01:20.056 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-27 00:01:20.056545 | orchestrator | 00:01:20.056 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-27 00:01:20.056580 | orchestrator | 00:01:20.056 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.056614 | orchestrator | 00:01:20.056 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-27 00:01:20.056650 | orchestrator | 00:01:20.056 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.056670 | orchestrator | 00:01:20.056 STDOUT terraform:  + allowed_address_pairs { 2025-03-27 00:01:20.056698 | orchestrator | 00:01:20.056 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-27 00:01:20.056705 | orchestrator | 00:01:20.056 STDOUT terraform:  } 2025-03-27 00:01:20.056727 | orchestrator | 00:01:20.056 STDOUT terraform:  + allowed_address_pairs { 2025-03-27 00:01:20.056755 | orchestrator | 00:01:20.056 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-27 00:01:20.056762 | orchestrator | 00:01:20.056 STDOUT terraform:  } 2025-03-27 00:01:20.056784 | orchestrator | 00:01:20.056 STDOUT terraform:  + allowed_address_pairs { 2025-03-27 00:01:20.056811 | orchestrator | 00:01:20.056 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-27 00:01:20.056818 | orchestrator | 00:01:20.056 STDOUT terraform:  } 2025-03-27 00:01:20.056839 | orchestrator | 00:01:20.056 STDOUT terraform:  + allowed_address_pairs { 2025-03-27 
00:01:20.056867 | orchestrator | 00:01:20.056 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-27 00:01:20.056873 | orchestrator | 00:01:20.056 STDOUT terraform:  } 2025-03-27 00:01:20.056899 | orchestrator | 00:01:20.056 STDOUT terraform:  + binding (known after apply) 2025-03-27 00:01:20.056914 | orchestrator | 00:01:20.056 STDOUT terraform:  + fixed_ip { 2025-03-27 00:01:20.056938 | orchestrator | 00:01:20.056 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-03-27 00:01:20.056969 | orchestrator | 00:01:20.056 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-27 00:01:20.056976 | orchestrator | 00:01:20.056 STDOUT terraform:  } 2025-03-27 00:01:20.056991 | orchestrator | 00:01:20.056 STDOUT terraform:  } 2025-03-27 00:01:20.057037 | orchestrator | 00:01:20.056 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-03-27 00:01:20.057085 | orchestrator | 00:01:20.057 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-03-27 00:01:20.057102 | orchestrator | 00:01:20.057 STDOUT terraform:  + force_destroy = false 2025-03-27 00:01:20.057130 | orchestrator | 00:01:20.057 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.057159 | orchestrator | 00:01:20.057 STDOUT terraform:  + port_id = (known after apply) 2025-03-27 00:01:20.057187 | orchestrator | 00:01:20.057 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.057214 | orchestrator | 00:01:20.057 STDOUT terraform:  + router_id = (known after apply) 2025-03-27 00:01:20.057252 | orchestrator | 00:01:20.057 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-27 00:01:20.057273 | orchestrator | 00:01:20.057 STDOUT terraform:  } 2025-03-27 00:01:20.057303 | orchestrator | 00:01:20.057 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-03-27 00:01:20.057338 | orchestrator | 00:01:20.057 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-03-27 00:01:20.057374 | orchestrator | 00:01:20.057 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-27 00:01:20.057418 | orchestrator | 00:01:20.057 STDOUT terraform:  + all_tags = (known after apply) 2025-03-27 00:01:20.057439 | orchestrator | 00:01:20.057 STDOUT terraform:  + availability_zone_hints = [ 2025-03-27 00:01:20.057446 | orchestrator | 00:01:20.057 STDOUT terraform:  + "nova", 2025-03-27 00:01:20.057461 | orchestrator | 00:01:20.057 STDOUT terraform:  ] 2025-03-27 00:01:20.057497 | orchestrator | 00:01:20.057 STDOUT terraform:  + distributed = (known after apply) 2025-03-27 00:01:20.057532 | orchestrator | 00:01:20.057 STDOUT terraform:  + enable_snat = (known after apply) 2025-03-27 00:01:20.057582 | orchestrator | 00:01:20.057 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-03-27 00:01:20.057619 | orchestrator | 00:01:20.057 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.057648 | orchestrator | 00:01:20.057 STDOUT terraform:  + name = "testbed" 2025-03-27 00:01:20.057684 | orchestrator | 00:01:20.057 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.057719 | orchestrator | 00:01:20.057 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.057749 | orchestrator | 00:01:20.057 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-03-27 00:01:20.057755 | orchestrator | 00:01:20.057 STDOUT terraform:  } 2025-03-27 00:01:20.057809 | orchestrator | 00:01:20.057 
STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-03-27 00:01:20.057859 | orchestrator | 00:01:20.057 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-03-27 00:01:20.057880 | orchestrator | 00:01:20.057 STDOUT terraform:  + description = "ssh" 2025-03-27 00:01:20.057903 | orchestrator | 00:01:20.057 STDOUT terraform:  + direction = "ingress" 2025-03-27 00:01:20.057922 | orchestrator | 00:01:20.057 STDOUT terraform:  + ethertype = "IPv4" 2025-03-27 00:01:20.057953 | orchestrator | 00:01:20.057 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.057969 | orchestrator | 00:01:20.057 STDOUT terraform:  + port_range_max = 22 2025-03-27 00:01:20.057984 | orchestrator | 00:01:20.057 STDOUT terraform:  + port_range_min = 22 2025-03-27 00:01:20.058005 | orchestrator | 00:01:20.057 STDOUT terraform:  + protocol = "tcp" 2025-03-27 00:01:20.058051 | orchestrator | 00:01:20.057 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.058080 | orchestrator | 00:01:20.058 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-27 00:01:20.058105 | orchestrator | 00:01:20.058 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-27 00:01:20.058133 | orchestrator | 00:01:20.058 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-27 00:01:20.058163 | orchestrator | 00:01:20.058 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.058169 | orchestrator | 00:01:20.058 STDOUT terraform:  } 2025-03-27 00:01:20.058224 | orchestrator | 00:01:20.058 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-03-27 00:01:20.058275 | orchestrator | 00:01:20.058 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-03-27 00:01:20.058299 | orchestrator | 00:01:20.058 STDOUT terraform:  + description = "wireguard" 2025-03-27 00:01:20.058322 | orchestrator | 00:01:20.058 STDOUT terraform:  + direction = "ingress" 2025-03-27 00:01:20.058342 | orchestrator | 00:01:20.058 STDOUT terraform:  + ethertype = "IPv4" 2025-03-27 00:01:20.058373 | orchestrator | 00:01:20.058 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.058393 | orchestrator | 00:01:20.058 STDOUT terraform:  + port_range_max = 51820 2025-03-27 00:01:20.058420 | orchestrator | 00:01:20.058 STDOUT terraform:  + port_range_min = 51820 2025-03-27 00:01:20.058436 | orchestrator | 00:01:20.058 STDOUT terraform:  + protocol = "udp" 2025-03-27 00:01:20.058467 | orchestrator | 00:01:20.058 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.058496 | orchestrator | 00:01:20.058 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-27 00:01:20.058526 | orchestrator | 00:01:20.058 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-27 00:01:20.058556 | orchestrator | 00:01:20.058 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-27 00:01:20.058585 | orchestrator | 00:01:20.058 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.058591 | orchestrator | 00:01:20.058 STDOUT terraform:  } 2025-03-27 00:01:20.058647 | orchestrator | 00:01:20.058 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-03-27 00:01:20.058697 | orchestrator | 00:01:20.058 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_management_rule3" { 2025-03-27 00:01:20.058721 | orchestrator | 00:01:20.058 STDOUT terraform:  + direction = "ingress" 2025-03-27 00:01:20.058741 | orchestrator | 00:01:20.058 STDOUT terraform:  + ethertype = "IPv4" 2025-03-27 00:01:20.058772 | orchestrator | 00:01:20.058 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.058803 | orchestrator | 00:01:20.058 STDOUT terraform:  + protocol = "tcp" 2025-03-27 00:01:20.058833 | orchestrator | 00:01:20.058 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.058862 | orchestrator | 00:01:20.058 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-27 00:01:20.058891 | orchestrator | 00:01:20.058 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-03-27 00:01:20.058920 | orchestrator | 00:01:20.058 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-27 00:01:20.058949 | orchestrator | 00:01:20.058 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.058955 | orchestrator | 00:01:20.058 STDOUT terraform:  } 2025-03-27 00:01:20.059010 | orchestrator | 00:01:20.058 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-03-27 00:01:20.059061 | orchestrator | 00:01:20.059 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-03-27 00:01:20.059085 | orchestrator | 00:01:20.059 STDOUT terraform:  + direction = "ingress" 2025-03-27 00:01:20.059105 | orchestrator | 00:01:20.059 STDOUT terraform:  + ethertype = "IPv4" 2025-03-27 00:01:20.059135 | orchestrator | 00:01:20.059 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.059155 | orchestrator | 00:01:20.059 STDOUT terraform:  + protocol = "udp" 2025-03-27 00:01:20.059185 | orchestrator | 00:01:20.059 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.059213 | orchestrator | 00:01:20.059 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-27 00:01:20.059243 | orchestrator | 00:01:20.059 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-03-27 00:01:20.059270 | orchestrator | 00:01:20.059 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-27 00:01:20.059300 | orchestrator | 00:01:20.059 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.059306 | orchestrator | 00:01:20.059 STDOUT terraform:  } 2025-03-27 00:01:20.059362 | orchestrator | 00:01:20.059 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-03-27 00:01:20.059421 | orchestrator | 00:01:20.059 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-03-27 00:01:20.059528 | orchestrator | 00:01:20.059 STDOUT terraform:  + direction = "ingress" 2025-03-27 00:01:20.059576 | orchestrator | 00:01:20.059 STDOUT terraform:  + ethertype = "IPv4" 2025-03-27 00:01:20.059593 | orchestrator | 00:01:20.059 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.059608 | orchestrator | 00:01:20.059 STDOUT terraform:  + protocol = "icmp" 2025-03-27 00:01:20.059625 | orchestrator | 00:01:20.059 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.059637 | orchestrator | 00:01:20.059 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-27 00:01:20.059650 | orchestrator | 00:01:20.059 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-27 00:01:20.059662 | orchestrator | 00:01:20.059 
STDOUT terraform:  + security_group_id = (known after apply) 2025-03-27 00:01:20.059677 | orchestrator | 00:01:20.059 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.059690 | orchestrator | 00:01:20.059 STDOUT terraform:  } 2025-03-27 00:01:20.059707 | orchestrator | 00:01:20.059 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-03-27 00:01:20.059745 | orchestrator | 00:01:20.059 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-03-27 00:01:20.059763 | orchestrator | 00:01:20.059 STDOUT terraform:  + direction = "ingress" 2025-03-27 00:01:20.059779 | orchestrator | 00:01:20.059 STDOUT terraform:  + ethertype = "IPv4" 2025-03-27 00:01:20.059824 | orchestrator | 00:01:20.059 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.059865 | orchestrator | 00:01:20.059 STDOUT terraform:  + protocol = "tcp" 2025-03-27 00:01:20.059883 | orchestrator | 00:01:20.059 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.059896 | orchestrator | 00:01:20.059 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-27 00:01:20.059922 | orchestrator | 00:01:20.059 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-27 00:01:20.059955 | orchestrator | 00:01:20.059 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-27 00:01:20.059971 | orchestrator | 00:01:20.059 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.060014 | orchestrator | 00:01:20.059 STDOUT terraform:  } 2025-03-27 00:01:20.060031 | orchestrator | 00:01:20.059 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-03-27 00:01:20.060075 | orchestrator | 00:01:20.059 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-03-27 00:01:20.060089 | orchestrator | 00:01:20.060 STDOUT terraform:  + direction = "ingress" 2025-03-27 00:01:20.060104 | orchestrator | 00:01:20.060 STDOUT terraform:  + ethertype = "IPv4" 2025-03-27 00:01:20.060120 | orchestrator | 00:01:20.060 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.060135 | orchestrator | 00:01:20.060 STDOUT terraform:  + protocol = "udp" 2025-03-27 00:01:20.060170 | orchestrator | 00:01:20.060 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.060187 | orchestrator | 00:01:20.060 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-27 00:01:20.060231 | orchestrator | 00:01:20.060 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-27 00:01:20.060248 | orchestrator | 00:01:20.060 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-27 00:01:20.060263 | orchestrator | 00:01:20.060 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.060278 | orchestrator | 00:01:20.060 STDOUT terraform:  } 2025-03-27 00:01:20.060332 | orchestrator | 00:01:20.060 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-03-27 00:01:20.060383 | orchestrator | 00:01:20.060 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-03-27 00:01:20.060400 | orchestrator | 00:01:20.060 STDOUT terraform:  + direction = "ingress" 2025-03-27 00:01:20.060435 | orchestrator | 00:01:20.060 STDOUT terraform:  + ethertype = "IPv4" 2025-03-27 00:01:20.060451 | orchestrator | 00:01:20.060 STDOUT terraform:  + id = (known after 
apply) 2025-03-27 00:01:20.060466 | orchestrator | 00:01:20.060 STDOUT terraform:  + protocol = "icmp" 2025-03-27 00:01:20.060482 | orchestrator | 00:01:20.060 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.060517 | orchestrator | 00:01:20.060 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-27 00:01:20.060533 | orchestrator | 00:01:20.060 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-27 00:01:20.060568 | orchestrator | 00:01:20.060 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-27 00:01:20.060584 | orchestrator | 00:01:20.060 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.060599 | orchestrator | 00:01:20.060 STDOUT terraform:  } 2025-03-27 00:01:20.060651 | orchestrator | 00:01:20.060 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-03-27 00:01:20.060694 | orchestrator | 00:01:20.060 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-03-27 00:01:20.060711 | orchestrator | 00:01:20.060 STDOUT terraform:  + description = "vrrp" 2025-03-27 00:01:20.060726 | orchestrator | 00:01:20.060 STDOUT terraform:  + direction = "ingress" 2025-03-27 00:01:20.060741 | orchestrator | 00:01:20.060 STDOUT terraform:  + ethertype = "IPv4" 2025-03-27 00:01:20.060776 | orchestrator | 00:01:20.060 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.060819 | orchestrator | 00:01:20.060 STDOUT terraform:  + protocol = "112" 2025-03-27 00:01:20.060836 | orchestrator | 00:01:20.060 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.060882 | orchestrator | 00:01:20.060 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-27 00:01:20.060900 | orchestrator | 00:01:20.060 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-27 00:01:20.060913 | orchestrator | 00:01:20.060 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-27 00:01:20.060928 | orchestrator | 00:01:20.060 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.060976 | orchestrator | 00:01:20.060 STDOUT terraform:  } 2025-03-27 00:01:20.060993 | orchestrator | 00:01:20.060 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-03-27 00:01:20.061008 | orchestrator | 00:01:20.060 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-03-27 00:01:20.061044 | orchestrator | 00:01:20.061 STDOUT terraform:  + all_tags = (known after apply) 2025-03-27 00:01:20.061071 | orchestrator | 00:01:20.061 STDOUT terraform:  + description = "management security group" 2025-03-27 00:01:20.061106 | orchestrator | 00:01:20.061 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.061122 | orchestrator | 00:01:20.061 STDOUT terraform:  + name = "testbed-management" 2025-03-27 00:01:20.061137 | orchestrator | 00:01:20.061 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.061172 | orchestrator | 00:01:20.061 STDOUT terraform:  + stateful = (known after apply) 2025-03-27 00:01:20.061188 | orchestrator | 00:01:20.061 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.061203 | orchestrator | 00:01:20.061 STDOUT terraform:  } 2025-03-27 00:01:20.061253 | orchestrator | 00:01:20.061 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-03-27 00:01:20.061298 | orchestrator | 00:01:20.061 STDOUT terraform: 
 + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-03-27 00:01:20.061315 | orchestrator | 00:01:20.061 STDOUT terraform:  + all_tags = (known after apply) 2025-03-27 00:01:20.061358 | orchestrator | 00:01:20.061 STDOUT terraform:  + description = "node security group" 2025-03-27 00:01:20.061392 | orchestrator | 00:01:20.061 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.061408 | orchestrator | 00:01:20.061 STDOUT terraform:  + name = "testbed-node" 2025-03-27 00:01:20.061438 | orchestrator | 00:01:20.061 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.061453 | orchestrator | 00:01:20.061 STDOUT terraform:  + stateful = (known after apply) 2025-03-27 00:01:20.061476 | orchestrator | 00:01:20.061 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.061525 | orchestrator | 00:01:20.061 STDOUT terraform:  } 2025-03-27 00:01:20.061542 | orchestrator | 00:01:20.061 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-03-27 00:01:20.061558 | orchestrator | 00:01:20.061 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-03-27 00:01:20.061593 | orchestrator | 00:01:20.061 STDOUT terraform:  + all_tags = (known after apply) 2025-03-27 00:01:20.061609 | orchestrator | 00:01:20.061 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-03-27 00:01:20.061625 | orchestrator | 00:01:20.061 STDOUT terraform:  + dns_nameservers = [ 2025-03-27 00:01:20.061640 | orchestrator | 00:01:20.061 STDOUT terraform:  + "8.8.8.8", 2025-03-27 00:01:20.061656 | orchestrator | 00:01:20.061 STDOUT terraform:  + "9.9.9.9", 2025-03-27 00:01:20.061697 | orchestrator | 00:01:20.061 STDOUT terraform:  ] 2025-03-27 00:01:20.061713 | orchestrator | 00:01:20.061 STDOUT terraform:  + enable_dhcp = true 2025-03-27 00:01:20.061737 | orchestrator | 00:01:20.061 STDOUT terraform:  + gateway_ip = (known after apply) 2025-03-27 00:01:20.061753 | orchestrator | 00:01:20.061 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.061785 | orchestrator | 00:01:20.061 STDOUT terraform:  + ip_version = 4 2025-03-27 00:01:20.061801 | orchestrator | 00:01:20.061 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-03-27 00:01:20.061842 | orchestrator | 00:01:20.061 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-03-27 00:01:20.061859 | orchestrator | 00:01:20.061 STDOUT terraform:  + name = "subnet-testbed-ma 2025-03-27 00:01:20.061891 | orchestrator | 00:01:20.061 STDOUT terraform: nagement" 2025-03-27 00:01:20.061907 | orchestrator | 00:01:20.061 STDOUT terraform:  + network_id = (known after apply) 2025-03-27 00:01:20.061947 | orchestrator | 00:01:20.061 STDOUT terraform:  + no_gateway = false 2025-03-27 00:01:20.061964 | orchestrator | 00:01:20.061 STDOUT terraform:  + region = (known after apply) 2025-03-27 00:01:20.061977 | orchestrator | 00:01:20.061 STDOUT terraform:  + service_types = (known after apply) 2025-03-27 00:01:20.061993 | orchestrator | 00:01:20.061 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-27 00:01:20.062044 | orchestrator | 00:01:20.061 STDOUT terraform:  + allocation_pool { 2025-03-27 00:01:20.062063 | orchestrator | 00:01:20.061 STDOUT terraform:  + end = "192.168.31.250" 2025-03-27 00:01:20.062076 | orchestrator | 00:01:20.062 STDOUT terraform:  + start = "192.168.31.200" 2025-03-27 00:01:20.062089 | orchestrator | 00:01:20.062 STDOUT terraform:  } 2025-03-27 00:01:20.062104 | orchestrator | 00:01:20.062 
STDOUT terraform:  } 2025-03-27 00:01:20.062117 | orchestrator | 00:01:20.062 STDOUT terraform:  # terraform_data.image will be created 2025-03-27 00:01:20.062132 | orchestrator | 00:01:20.062 STDOUT terraform:  + resource "terraform_data" "image" { 2025-03-27 00:01:20.062145 | orchestrator | 00:01:20.062 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.062168 | orchestrator | 00:01:20.062 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-03-27 00:01:20.062181 | orchestrator | 00:01:20.062 STDOUT terraform:  + output = (known after apply) 2025-03-27 00:01:20.062196 | orchestrator | 00:01:20.062 STDOUT terraform:  } 2025-03-27 00:01:20.062230 | orchestrator | 00:01:20.062 STDOUT terraform:  # terraform_data.image_node will be created 2025-03-27 00:01:20.062247 | orchestrator | 00:01:20.062 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-03-27 00:01:20.062260 | orchestrator | 00:01:20.062 STDOUT terraform:  + id = (known after apply) 2025-03-27 00:01:20.062275 | orchestrator | 00:01:20.062 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-03-27 00:01:20.062288 | orchestrator | 00:01:20.062 STDOUT terraform:  + output = (known after apply) 2025-03-27 00:01:20.062303 | orchestrator | 00:01:20.062 STDOUT terraform:  } 2025-03-27 00:01:20.062318 | orchestrator | 00:01:20.062 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-03-27 00:01:20.062351 | orchestrator | 00:01:20.062 STDOUT terraform: Changes to Outputs: 2025-03-27 00:01:20.062367 | orchestrator | 00:01:20.062 STDOUT terraform:  + manager_address = (sensitive value) 2025-03-27 00:01:20.235759 | orchestrator | 00:01:20.062 STDOUT terraform:  + private_key = (sensitive value) 2025-03-27 00:01:20.235893 | orchestrator | 00:01:20.235 STDOUT terraform: terraform_data.image: Creating... 2025-03-27 00:01:20.236829 | orchestrator | 00:01:20.235 STDOUT terraform: terraform_data.image_node: Creating... 2025-03-27 00:01:20.236906 | orchestrator | 00:01:20.236 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=162dd8af-6416-97b6-8e36-446da24c2ff0] 2025-03-27 00:01:20.245302 | orchestrator | 00:01:20.236 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=0ffcfb0d-7b1b-5033-93ea-695924aeb3e7] 2025-03-27 00:01:20.245340 | orchestrator | 00:01:20.245 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-03-27 00:01:20.249039 | orchestrator | 00:01:20.248 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-03-27 00:01:20.253876 | orchestrator | 00:01:20.253 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-03-27 00:01:20.256280 | orchestrator | 00:01:20.256 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-03-27 00:01:20.256566 | orchestrator | 00:01:20.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-03-27 00:01:20.256579 | orchestrator | 00:01:20.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-03-27 00:01:20.257350 | orchestrator | 00:01:20.257 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-03-27 00:01:20.258355 | orchestrator | 00:01:20.258 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-03-27 00:01:20.258898 | orchestrator | 00:01:20.258 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 
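The terraform_data.image and terraform_data.image_node resources created above only carry the image name ("Ubuntu 24.04"), so a changed image name can force replacement of dependent resources, and the two data.openstack_images_image_v2 reads then resolve that name against Glance during the apply. A minimal sketch of this pattern, assuming a variable name and a most_recent filter that are not visible in this log:

variable "node_image" {
  type    = string
  default = "Ubuntu 24.04"   # matches the input shown in the plan above
}

# Plain value container; its output changes whenever the image name changes.
resource "terraform_data" "image_node" {
  input = var.node_image
}

# Resolve the image name to a Glance image ID; depending on the terraform_data
# output defers the read to apply time, as seen in this log.
data "openstack_images_image_v2" "image_node" {
  name        = terraform_data.image_node.output
  most_recent = true
}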
2025-03-27 00:01:20.258919 | orchestrator | 00:01:20.258 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-03-27 00:01:20.732107 | orchestrator | 00:01:20.731 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-03-27 00:01:20.737446 | orchestrator | 00:01:20.731 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-03-27 00:01:20.737531 | orchestrator | 00:01:20.737 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-03-27 00:01:20.739379 | orchestrator | 00:01:20.739 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-03-27 00:01:21.005698 | orchestrator | 00:01:21.005 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-03-27 00:01:21.014071 | orchestrator | 00:01:21.013 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-03-27 00:01:26.108483 | orchestrator | 00:01:26.108 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=4c2a9758-fc59-443b-9b18-adab3065df35] 2025-03-27 00:01:26.117212 | orchestrator | 00:01:26.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-03-27 00:01:30.256040 | orchestrator | 00:01:30.255 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-03-27 00:01:30.256161 | orchestrator | 00:01:30.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-03-27 00:01:30.258237 | orchestrator | 00:01:30.258 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-03-27 00:01:30.259490 | orchestrator | 00:01:30.259 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-03-27 00:01:30.259667 | orchestrator | 00:01:30.259 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-03-27 00:01:30.259823 | orchestrator | 00:01:30.259 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-03-27 00:01:30.738503 | orchestrator | 00:01:30.738 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-03-27 00:01:30.740576 | orchestrator | 00:01:30.740 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-03-27 00:01:30.848291 | orchestrator | 00:01:30.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=50b1bf4c-79f1-4c85-95b4-05ba7fb61d40] 2025-03-27 00:01:30.859552 | orchestrator | 00:01:30.859 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-03-27 00:01:30.873701 | orchestrator | 00:01:30.873 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 11s [id=c06239b1-1e23-4e3e-9542-3c7768e76fd7] 2025-03-27 00:01:30.884132 | orchestrator | 00:01:30.884 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 
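The node_volume resources that start creating here are counted openstack_blockstorage_volume_v3 resources (indices 0 through 17 appear in this log), created alongside the "testbed" keypair. A hedged sketch; the volume size, naming scheme, and public key source are assumptions, since they are not visible in this excerpt:

# 18 extra data volumes for the node servers (node_volume[0..17] above).
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 18
  name  = "testbed-node-volume-${count.index}"   # naming assumed
  size  = 20                                     # GiB, size assumed
}

# The keypair that completes with id=testbed in this log.
resource "openstack_compute_keypair_v2" "key" {
  name       = "testbed"
  public_key = file("${path.module}/id_rsa.pub")   # source of the key assumed
}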
2025-03-27 00:01:30.899598 | orchestrator | 00:01:30.899 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 11s [id=1a89a9ff-44c1-4404-a46c-604e790c64d7] 2025-03-27 00:01:30.906722 | orchestrator | 00:01:30.906 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-03-27 00:01:30.910186 | orchestrator | 00:01:30.909 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=874d53e3-fb17-4b5b-8e0b-b33da9e1cc23] 2025-03-27 00:01:30.921761 | orchestrator | 00:01:30.921 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-03-27 00:01:30.922173 | orchestrator | 00:01:30.921 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=f2bb18ed-1663-4732-9ace-7a8cbf1e5186] 2025-03-27 00:01:30.927577 | orchestrator | 00:01:30.927 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-03-27 00:01:30.928994 | orchestrator | 00:01:30.928 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 11s [id=0b86602b-3b4a-4669-b84e-8d0be08a4eb8] 2025-03-27 00:01:30.934473 | orchestrator | 00:01:30.934 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-03-27 00:01:30.978095 | orchestrator | 00:01:30.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=3ba6755c-983a-4f3d-8d53-7abda8c22d5d] 2025-03-27 00:01:30.985995 | orchestrator | 00:01:30.985 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-03-27 00:01:30.992882 | orchestrator | 00:01:30.992 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=3a3b00e3-da7a-4c3b-8b0c-ab011795b6c9] 2025-03-27 00:01:30.997821 | orchestrator | 00:01:30.997 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-03-27 00:01:31.014506 | orchestrator | 00:01:31.014 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-03-27 00:01:31.186006 | orchestrator | 00:01:31.185 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=61228255-bfc1-4c3b-9b0a-267eeef01c9c] 2025-03-27 00:01:31.201333 | orchestrator | 00:01:31.201 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-03-27 00:01:36.120184 | orchestrator | 00:01:36.119 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-03-27 00:01:36.317971 | orchestrator | 00:01:36.317 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=5498cf3d-971d-4d04-a26e-caa954b0ff0a] 2025-03-27 00:01:36.325344 | orchestrator | 00:01:36.325 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-03-27 00:01:40.860527 | orchestrator | 00:01:40.860 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-03-27 00:01:40.885705 | orchestrator | 00:01:40.885 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-03-27 00:01:40.907899 | orchestrator | 00:01:40.907 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... 
[10s elapsed] 2025-03-27 00:01:40.922207 | orchestrator | 00:01:40.921 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-03-27 00:01:40.928308 | orchestrator | 00:01:40.928 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-03-27 00:01:40.935606 | orchestrator | 00:01:40.935 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-03-27 00:01:40.986900 | orchestrator | 00:01:40.986 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-03-27 00:01:40.998207 | orchestrator | 00:01:40.998 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-03-27 00:01:41.136971 | orchestrator | 00:01:41.136 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=a6b08226-ae04-4ebb-8f92-51d42c32f5ac] 2025-03-27 00:01:41.150282 | orchestrator | 00:01:41.149 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-03-27 00:01:41.178948 | orchestrator | 00:01:41.178 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=a8735590-8c0d-455a-9e36-1ed693cbdd10] 2025-03-27 00:01:41.187999 | orchestrator | 00:01:41.187 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-03-27 00:01:41.202457 | orchestrator | 00:01:41.202 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-03-27 00:01:41.231767 | orchestrator | 00:01:41.231 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=29ec91c5-8d97-4cfd-bce6-384323cd2541] 2025-03-27 00:01:41.244226 | orchestrator | 00:01:41.244 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-03-27 00:01:41.252648 | orchestrator | 00:01:41.252 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=ae4e80391d38b270b2dcd6d78073de941f675a9c] 2025-03-27 00:01:41.263996 | orchestrator | 00:01:41.263 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-03-27 00:01:41.271908 | orchestrator | 00:01:41.271 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=bfc1b1dd-9bfd-4d32-b01b-91720163ebc8] 2025-03-27 00:01:41.278703 | orchestrator | 00:01:41.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-03-27 00:01:41.305739 | orchestrator | 00:01:41.305 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=dbd72eb5-415c-46b6-800c-c9a4152e0b1d] 2025-03-27 00:01:41.313201 | orchestrator | 00:01:41.313 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-03-27 00:01:41.330888 | orchestrator | 00:01:41.330 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=77112039-dad3-47d6-9314-c2213ca1fc67] 2025-03-27 00:01:41.342964 | orchestrator | 00:01:41.342 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-03-27 00:01:41.347395 | orchestrator | 00:01:41.347 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=2b24df06d88447bed3fc032cc4da2f0bf4bb03fa] 2025-03-27 00:01:41.357324 | orchestrator | 00:01:41.357 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 
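local_sensitive_file.id_rsa and local_file.id_rsa_pub above write the generated SSH key material to disk for later use. One common way to produce such files, sketched under the assumption that a tls_private_key resource generates the key (the actual testbed sources may do this differently):

# Generate an SSH keypair inside Terraform (assumption).
resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Private key written with restrictive permissions.
resource "local_sensitive_file" "id_rsa" {
  content         = tls_private_key.ssh.private_key_openssh
  filename        = "${path.module}/.id_rsa"        # filename assumed
  file_permission = "0600"
}

# Matching public key for distribution to the nodes.
resource "local_file" "id_rsa_pub" {
  content  = tls_private_key.ssh.public_key_openssh
  filename = "${path.module}/.id_rsa.pub"           # filename assumed
}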
2025-03-27 00:01:41.364372 | orchestrator | 00:01:41.364 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=c304c21c-7b61-43fc-89e5-88e0ceb08200] 2025-03-27 00:01:41.406551 | orchestrator | 00:01:41.406 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=3b62db4a-d9c9-4dee-909c-fb2dda9345a8] 2025-03-27 00:01:41.583760 | orchestrator | 00:01:41.583 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=78ef62e1-d898-4677-a373-5923280415bd] 2025-03-27 00:01:46.326692 | orchestrator | 00:01:46.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-03-27 00:01:46.690878 | orchestrator | 00:01:46.690 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=e3cc3972-7e4e-4695-9af3-1d8e6eae8a85] 2025-03-27 00:01:47.185308 | orchestrator | 00:01:47.184 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=36f9f3d4-7bf5-4da9-92ce-b006785e6f24] 2025-03-27 00:01:47.194747 | orchestrator | 00:01:47.194 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-03-27 00:01:51.151414 | orchestrator | 00:01:51.151 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-03-27 00:01:51.188464 | orchestrator | 00:01:51.188 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-03-27 00:01:51.265031 | orchestrator | 00:01:51.264 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-03-27 00:01:51.279277 | orchestrator | 00:01:51.279 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-03-27 00:01:51.314670 | orchestrator | 00:01:51.314 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-03-27 00:01:51.563803 | orchestrator | 00:01:51.563 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=ac5892bc-50dc-4a75-a426-a457b05ebd21] 2025-03-27 00:01:51.585467 | orchestrator | 00:01:51.585 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=18887533-b38f-4c9f-bae8-d4f30e6c3682] 2025-03-27 00:01:51.688769 | orchestrator | 00:01:51.688 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5] 2025-03-27 00:01:51.733075 | orchestrator | 00:01:51.732 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=5542f5ea-ae93-4dfe-9922-9cc923bfb807] 2025-03-27 00:01:51.760749 | orchestrator | 00:01:51.760 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=80403e93-bd3e-4884-b247-e0291e0a6666] 2025-03-27 00:01:53.955002 | orchestrator | 00:01:53.954 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=15058221-6d00-41d3-b9f1-4860f270c1ca] 2025-03-27 00:01:53.961174 | orchestrator | 00:01:53.960 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-03-27 00:01:53.964297 | orchestrator | 00:01:53.964 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 
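The management network pieces that finish here match the plan shown earlier: a /20 subnet with Google/Quad9 resolvers and a small allocation pool, attached via a router interface to an external network. A reconstruction from those plan values (the resource layout is assumed; the values are taken from the plan above):

resource "openstack_networking_network_v2" "net_management" {
  name = "net-testbed-management"   # name assumed; not shown in this excerpt
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}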
2025-03-27 00:01:53.964788 | orchestrator | 00:01:53.964 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-03-27 00:01:54.120536 | orchestrator | 00:01:54.120 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=59fa97a5-d7e6-42b2-8837-e6912b51f821] 2025-03-27 00:01:54.129102 | orchestrator | 00:01:54.128 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-03-27 00:01:54.129601 | orchestrator | 00:01:54.129 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-03-27 00:01:54.129715 | orchestrator | 00:01:54.129 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-03-27 00:01:54.132713 | orchestrator | 00:01:54.132 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-03-27 00:01:54.133094 | orchestrator | 00:01:54.132 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-03-27 00:01:54.134823 | orchestrator | 00:01:54.134 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-03-27 00:01:54.184889 | orchestrator | 00:01:54.184 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=c02e704e-5032-41e1-8737-47ffaf7f33ba] 2025-03-27 00:01:54.191703 | orchestrator | 00:01:54.191 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-03-27 00:01:54.192937 | orchestrator | 00:01:54.192 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-03-27 00:01:54.195515 | orchestrator | 00:01:54.195 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-03-27 00:01:54.314325 | orchestrator | 00:01:54.313 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=09c23f3d-6099-4851-964d-144961504ada] 2025-03-27 00:01:54.320518 | orchestrator | 00:01:54.320 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-03-27 00:01:54.371522 | orchestrator | 00:01:54.371 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=c74aa804-1916-423e-b4fd-2e174bedd8ed] 2025-03-27 00:01:54.387714 | orchestrator | 00:01:54.387 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-03-27 00:01:54.493239 | orchestrator | 00:01:54.492 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=8e48965f-a535-4916-b3e5-c9d4e4d998b6] 2025-03-27 00:01:54.507894 | orchestrator | 00:01:54.507 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-03-27 00:01:54.583077 | orchestrator | 00:01:54.582 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=e3931426-de17-4cdb-b853-f69f3ee8b434] 2025-03-27 00:01:54.598288 | orchestrator | 00:01:54.598 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 
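The security group rules being created here correspond to the plan entries earlier in the log: SSH (22/tcp) and WireGuard (51820/udp) from anywhere, internal tcp/udp from 192.168.16.0/20, ICMP, and VRRP (IP protocol 112). Sketched for the management group, with values taken from that plan; which security group each "known after apply" rule attaches to is inferred from the resource names:

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

# "ssh" rule from the plan (rule1).
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

# "wireguard" rule from the plan (rule2).
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
  description       = "wireguard"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "udp"
  port_range_min    = 51820
  port_range_max    = 51820
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}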
2025-03-27 00:01:54.673737 | orchestrator | 00:01:54.673 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=0e5df119-d683-470d-b165-cba665cb0f7b] 2025-03-27 00:01:54.688067 | orchestrator | 00:01:54.687 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-03-27 00:01:54.762925 | orchestrator | 00:01:54.757 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=6f7f2905-3fef-40ef-9303-8f960bb679f9] 2025-03-27 00:01:54.773922 | orchestrator | 00:01:54.773 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-03-27 00:01:54.845830 | orchestrator | 00:01:54.845 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=f8b531b3-7694-4e29-8c62-3dfd19ce5919] 2025-03-27 00:01:54.859277 | orchestrator | 00:01:54.859 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-03-27 00:01:54.924114 | orchestrator | 00:01:54.923 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=7a07d564-65a7-4d12-8858-f2460e860a64] 2025-03-27 00:01:55.000586 | orchestrator | 00:01:55.000 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=92832737-8b24-48e9-8900-e70b61c93994] 2025-03-27 00:01:59.853848 | orchestrator | 00:01:59.853 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=4d25ef40-c7fe-49a0-a02b-afc9b47184e3] 2025-03-27 00:02:00.137257 | orchestrator | 00:02:00.136 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=0883e32d-801d-4036-b2c3-58b1865d5393] 2025-03-27 00:02:00.394180 | orchestrator | 00:02:00.393 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=35dd892d-a4da-4e5e-a278-7af9ae6f070a] 2025-03-27 00:02:00.461218 | orchestrator | 00:02:00.460 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=98a95194-b1fa-4db4-86fd-28b6004e33ff] 2025-03-27 00:02:00.563706 | orchestrator | 00:02:00.563 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=9ef9c157-deef-478b-8ca7-28cebc65d142] 2025-03-27 00:02:00.568592 | orchestrator | 00:02:00.568 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-03-27 00:02:01.017607 | orchestrator | 00:02:01.017 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=70b86d6b-2637-4896-bb23-5958abc99a68] 2025-03-27 00:02:01.386393 | orchestrator | 00:02:01.385 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=9196fb83-db77-485f-9095-a25235e41813] 2025-03-27 00:02:01.603869 | orchestrator | 00:02:01.603 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 7s [id=ad4eee81-8c4d-438a-a3db-e80d525bce8a] 2025-03-27 00:02:01.628758 | orchestrator | 00:02:01.628 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-03-27 00:02:01.633579 | orchestrator | 00:02:01.633 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 
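The node_port_management ports created just above carry fixed addresses in the 192.168.16.10-15 range plus a set of allowed_address_pairs for the VIPs and prefixes the nodes may answer for, exactly as in the plan earlier. A sketch reusing the network and subnet from the earlier sketch; the .10 address for index 0 is inferred from the pattern, since only .11-.15 are visible in this excerpt:

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"
  }

  # Prefixes/VIPs the port may legitimately source or answer for.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}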
2025-03-27 00:02:01.633804 | orchestrator | 00:02:01.633 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-03-27 00:02:01.639877 | orchestrator | 00:02:01.639 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-03-27 00:02:01.643727 | orchestrator | 00:02:01.643 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-03-27 00:02:01.647193 | orchestrator | 00:02:01.647 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-03-27 00:02:06.811456 | orchestrator | 00:02:06.811 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=ef2d7702-2780-4c60-a780-db9204a4f647] 2025-03-27 00:02:06.821048 | orchestrator | 00:02:06.820 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-03-27 00:02:06.829535 | orchestrator | 00:02:06.829 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-03-27 00:02:06.834326 | orchestrator | 00:02:06.829 STDOUT terraform: local_file.inventory: Creating... 2025-03-27 00:02:06.834388 | orchestrator | 00:02:06.834 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=e5fbc46b7ec5b2be4ee429bd6add8c5c59bc609c] 2025-03-27 00:02:06.837663 | orchestrator | 00:02:06.837 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=74982c22a4d088dd795deb7c6d87743937a8cb9a] 2025-03-27 00:02:08.165055 | orchestrator | 00:02:08.164 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=ef2d7702-2780-4c60-a780-db9204a4f647] 2025-03-27 00:02:11.631340 | orchestrator | 00:02:11.630 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-03-27 00:02:11.634470 | orchestrator | 00:02:11.634 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-03-27 00:02:11.634604 | orchestrator | 00:02:11.634 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-03-27 00:02:11.644823 | orchestrator | 00:02:11.644 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-03-27 00:02:11.644915 | orchestrator | 00:02:11.644 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-03-27 00:02:11.650903 | orchestrator | 00:02:11.650 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-03-27 00:02:21.632220 | orchestrator | 00:02:21.631 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-03-27 00:02:21.634730 | orchestrator | 00:02:21.634 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-03-27 00:02:21.634813 | orchestrator | 00:02:21.634 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-03-27 00:02:21.644985 | orchestrator | 00:02:21.644 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-03-27 00:02:21.645041 | orchestrator | 00:02:21.644 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-03-27 00:02:21.651168 | orchestrator | 00:02:21.650 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[20s elapsed] 2025-03-27 00:02:21.965807 | orchestrator | 00:02:21.965 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=3ea74fff-c6e1-4897-9cee-f0fa190a6c3d] 2025-03-27 00:02:22.099501 | orchestrator | 00:02:22.099 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=d40f0f46-d382-4584-bd1a-add7949dae36] 2025-03-27 00:02:22.715795 | orchestrator | 00:02:22.715 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=19dd36bd-1803-498f-98ea-d30edc603a19] 2025-03-27 00:02:31.635164 | orchestrator | 00:02:31.634 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-03-27 00:02:31.645297 | orchestrator | 00:02:31.645 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-03-27 00:02:31.645687 | orchestrator | 00:02:31.645 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-03-27 00:02:32.890841 | orchestrator | 00:02:32.890 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=c6449cdc-4150-402b-9ec3-bcea0c22f220] 2025-03-27 00:02:32.905840 | orchestrator | 00:02:32.905 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=c9b485a6-4264-4d0b-a67f-a7f4d7fe712a] 2025-03-27 00:02:32.989006 | orchestrator | 00:02:32.988 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=c11642bb-c64b-42da-898f-a4873f4c3e1a] 2025-03-27 00:02:33.007275 | orchestrator | 00:02:33.007 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-03-27 00:02:33.013581 | orchestrator | 00:02:33.013 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-03-27 00:02:33.015051 | orchestrator | 00:02:33.014 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-03-27 00:02:33.022892 | orchestrator | 00:02:33.022 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-03-27 00:02:33.023397 | orchestrator | 00:02:33.023 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=7930084248590282238] 2025-03-27 00:02:33.023646 | orchestrator | 00:02:33.023 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 2025-03-27 00:02:33.024242 | orchestrator | 00:02:33.024 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 2025-03-27 00:02:33.026296 | orchestrator | 00:02:33.026 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-03-27 00:02:33.030108 | orchestrator | 00:02:33.029 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-03-27 00:02:33.046735 | orchestrator | 00:02:33.046 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-03-27 00:02:33.053159 | orchestrator | 00:02:33.053 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 2025-03-27 00:02:33.057299 | orchestrator | 00:02:33.057 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 
2025-03-27 00:02:38.447731 | orchestrator | 00:02:38.444 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=c9b485a6-4264-4d0b-a67f-a7f4d7fe712a/c304c21c-7b61-43fc-89e5-88e0ceb08200] 2025-03-27 00:02:38.449874 | orchestrator | 00:02:38.449 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=3ea74fff-c6e1-4897-9cee-f0fa190a6c3d/874d53e3-fb17-4b5b-8e0b-b33da9e1cc23] 2025-03-27 00:02:38.455074 | orchestrator | 00:02:38.454 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-03-27 00:02:38.458206 | orchestrator | 00:02:38.458 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-03-27 00:02:38.468537 | orchestrator | 00:02:38.468 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=c11642bb-c64b-42da-898f-a4873f4c3e1a/0b86602b-3b4a-4669-b84e-8d0be08a4eb8] 2025-03-27 00:02:38.468608 | orchestrator | 00:02:38.468 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=c6449cdc-4150-402b-9ec3-bcea0c22f220/f2bb18ed-1663-4732-9ace-7a8cbf1e5186] 2025-03-27 00:02:38.477600 | orchestrator | 00:02:38.477 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-03-27 00:02:38.478594 | orchestrator | 00:02:38.478 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-03-27 00:02:38.482471 | orchestrator | 00:02:38.482 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=d40f0f46-d382-4584-bd1a-add7949dae36/a8735590-8c0d-455a-9e36-1ed693cbdd10] 2025-03-27 00:02:38.491550 | orchestrator | 00:02:38.491 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-03-27 00:02:38.492999 | orchestrator | 00:02:38.492 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=c11642bb-c64b-42da-898f-a4873f4c3e1a/3ba6755c-983a-4f3d-8d53-7abda8c22d5d] 2025-03-27 00:02:38.497056 | orchestrator | 00:02:38.496 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=3ea74fff-c6e1-4897-9cee-f0fa190a6c3d/1a89a9ff-44c1-4404-a46c-604e790c64d7] 2025-03-27 00:02:38.503246 | orchestrator | 00:02:38.502 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 6s [id=19dd36bd-1803-498f-98ea-d30edc603a19/77112039-dad3-47d6-9314-c2213ca1fc67] 2025-03-27 00:02:38.508316 | orchestrator | 00:02:38.508 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 6s [id=c9b485a6-4264-4d0b-a67f-a7f4d7fe712a/c06239b1-1e23-4e3e-9542-3c7768e76fd7] 2025-03-27 00:02:38.509538 | orchestrator | 00:02:38.509 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-03-27 00:02:38.513589 | orchestrator | 00:02:38.513 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-03-27 00:02:38.519576 | orchestrator | 00:02:38.519 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-03-27 00:02:38.528033 | orchestrator | 00:02:38.527 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
2025-03-27 00:02:38.555315 | orchestrator | 00:02:38.554 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=d40f0f46-d382-4584-bd1a-add7949dae36/5498cf3d-971d-4d04-a26e-caa954b0ff0a] 2025-03-27 00:02:43.776793 | orchestrator | 00:02:43.776 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=19dd36bd-1803-498f-98ea-d30edc603a19/61228255-bfc1-4c3b-9b0a-267eeef01c9c] 2025-03-27 00:02:43.788555 | orchestrator | 00:02:43.788 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=c6449cdc-4150-402b-9ec3-bcea0c22f220/bfc1b1dd-9bfd-4d32-b01b-91720163ebc8] 2025-03-27 00:02:43.861961 | orchestrator | 00:02:43.861 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=d40f0f46-d382-4584-bd1a-add7949dae36/3b62db4a-d9c9-4dee-909c-fb2dda9345a8] 2025-03-27 00:02:43.863893 | orchestrator | 00:02:43.863 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=19dd36bd-1803-498f-98ea-d30edc603a19/29ec91c5-8d97-4cfd-bce6-384323cd2541] 2025-03-27 00:02:43.869408 | orchestrator | 00:02:43.869 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=c11642bb-c64b-42da-898f-a4873f4c3e1a/a6b08226-ae04-4ebb-8f92-51d42c32f5ac] 2025-03-27 00:02:43.887258 | orchestrator | 00:02:43.887 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=3ea74fff-c6e1-4897-9cee-f0fa190a6c3d/3a3b00e3-da7a-4c3b-8b0c-ab011795b6c9] 2025-03-27 00:02:43.892716 | orchestrator | 00:02:43.892 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=c6449cdc-4150-402b-9ec3-bcea0c22f220/50b1bf4c-79f1-4c85-95b4-05ba7fb61d40] 2025-03-27 00:02:43.897063 | orchestrator | 00:02:43.896 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=c9b485a6-4264-4d0b-a67f-a7f4d7fe712a/dbd72eb5-415c-46b6-800c-c9a4152e0b1d] 2025-03-27 00:02:48.529197 | orchestrator | 00:02:48.528 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-03-27 00:02:58.533801 | orchestrator | 00:02:58.533 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-03-27 00:02:59.220244 | orchestrator | 00:02:59.219 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=a103a63f-0017-4cb8-bd65-5871c104567f] 2025-03-27 00:02:59.232990 | orchestrator | 00:02:59.232 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
2025-03-27 00:02:59.233052 | orchestrator | 00:02:59.232 STDOUT terraform: Outputs: 2025-03-27 00:02:59.233066 | orchestrator | 00:02:59.232 STDOUT terraform: manager_address = 2025-03-27 00:02:59.242104 | orchestrator | 00:02:59.233 STDOUT terraform: private_key = 2025-03-27 00:03:09.585046 | orchestrator | changed 2025-03-27 00:03:09.629417 | 2025-03-27 00:03:09.629552 | TASK [Fetch manager address] 2025-03-27 00:03:10.107923 | orchestrator | ok 2025-03-27 00:03:10.123149 | 2025-03-27 00:03:10.123277 | TASK [Set manager_host address] 2025-03-27 00:03:10.224554 | orchestrator | ok 2025-03-27 00:03:10.238543 | 2025-03-27 00:03:10.238777 | LOOP [Update ansible collections] 2025-03-27 00:03:11.027968 | orchestrator | changed 2025-03-27 00:03:11.727494 | orchestrator | changed 2025-03-27 00:03:11.747889 | 2025-03-27 00:03:11.748184 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-03-27 00:03:22.277734 | orchestrator | ok 2025-03-27 00:03:22.289869 | 2025-03-27 00:03:22.289974 | TASK [Wait a little longer for the manager so that everything is ready] 2025-03-27 00:04:22.343274 | orchestrator | ok 2025-03-27 00:04:22.356023 | 2025-03-27 00:04:22.356187 | TASK [Fetch manager ssh hostkey] 2025-03-27 00:04:23.439754 | orchestrator | Output suppressed because no_log was given 2025-03-27 00:04:23.461984 | 2025-03-27 00:04:23.462224 | TASK [Get ssh keypair from terraform environment] 2025-03-27 00:04:24.018162 | orchestrator | changed 2025-03-27 00:04:24.037657 | 2025-03-27 00:04:24.037854 | TASK [Point out that the following task takes some time and does not give any output] 2025-03-27 00:04:24.092777 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
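The 'Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"' step above presumably uses Ansible's wait_for module with a search_regex on the SSH banner. As a rough shell sketch only (it assumes netcat is available and that MANAGER_HOST has been exported with the address from 'Set manager_host address'; neither detail is taken from the playbook itself):

    # Poll the manager's SSH port until the banner contains "OpenSSH",
    # giving up after roughly 300 seconds (60 attempts x 5 s each).
    for attempt in $(seq 1 60); do
        nc -w 2 "$MANAGER_HOST" 22 </dev/null 2>/dev/null | grep -q OpenSSH && break
        sleep 5
    done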
2025-03-27 00:04:24.104738 | 2025-03-27 00:04:24.104864 | TASK [Run manager part 0] 2025-03-27 00:04:24.954207 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-03-27 00:04:24.995828 | orchestrator | 2025-03-27 00:04:27.310816 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-03-27 00:04:27.310871 | orchestrator | 2025-03-27 00:04:27.310890 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-03-27 00:04:27.310907 | orchestrator | ok: [testbed-manager] 2025-03-27 00:04:29.342157 | orchestrator | 2025-03-27 00:04:29.342320 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-03-27 00:04:29.342358 | orchestrator | 2025-03-27 00:04:29.342377 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-27 00:04:29.342408 | orchestrator | ok: [testbed-manager] 2025-03-27 00:04:30.033855 | orchestrator | 2025-03-27 00:04:30.033954 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-03-27 00:04:30.034001 | orchestrator | ok: [testbed-manager] 2025-03-27 00:04:30.079520 | orchestrator | 2025-03-27 00:04:30.079568 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-03-27 00:04:30.079585 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:04:30.105236 | orchestrator | 2025-03-27 00:04:30.105259 | orchestrator | TASK [Update package cache] **************************************************** 2025-03-27 00:04:30.105270 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:04:30.125804 | orchestrator | 2025-03-27 00:04:30.125819 | orchestrator | TASK [Install required packages] *********************************************** 2025-03-27 00:04:30.125829 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:04:30.152399 | orchestrator | 2025-03-27 00:04:30.152459 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-03-27 00:04:30.152474 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:04:30.174486 | orchestrator | 2025-03-27 00:04:30.174513 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-03-27 00:04:30.174525 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:04:30.196296 | orchestrator | 2025-03-27 00:04:30.196321 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-03-27 00:04:30.196332 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:04:30.219510 | orchestrator | 2025-03-27 00:04:30.219526 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-03-27 00:04:30.219536 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:04:31.085250 | orchestrator | 2025-03-27 00:04:31.085311 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-03-27 00:04:31.085329 | orchestrator | changed: [testbed-manager] 2025-03-27 00:07:46.482969 | orchestrator | 2025-03-27 00:07:46.483050 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-03-27 00:07:46.483076 | orchestrator | changed: [testbed-manager] 2025-03-27 00:09:24.696251 | orchestrator | 2025-03-27 00:09:24.696332 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-03-27 00:09:24.696363 | orchestrator | changed: [testbed-manager] 2025-03-27 00:09:47.654371 | orchestrator | 2025-03-27 00:09:47.654504 | orchestrator | TASK [Install required packages] *********************************************** 2025-03-27 00:09:47.654543 | orchestrator | changed: [testbed-manager] 2025-03-27 00:09:58.601498 | orchestrator | 2025-03-27 00:09:58.601609 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-03-27 00:09:58.601646 | orchestrator | changed: [testbed-manager] 2025-03-27 00:09:58.647487 | orchestrator | 2025-03-27 00:09:58.647550 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-03-27 00:09:58.647574 | orchestrator | ok: [testbed-manager] 2025-03-27 00:09:59.629515 | orchestrator | 2025-03-27 00:09:59.629617 | orchestrator | TASK [Get current user] ******************************************************** 2025-03-27 00:09:59.629654 | orchestrator | ok: [testbed-manager] 2025-03-27 00:10:00.405125 | orchestrator | 2025-03-27 00:10:00.405234 | orchestrator | TASK [Create venv directory] *************************************************** 2025-03-27 00:10:00.405279 | orchestrator | changed: [testbed-manager] 2025-03-27 00:10:08.617372 | orchestrator | 2025-03-27 00:10:08.617507 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-03-27 00:10:08.617550 | orchestrator | changed: [testbed-manager] 2025-03-27 00:10:15.418155 | orchestrator | 2025-03-27 00:10:15.418286 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-03-27 00:10:15.418344 | orchestrator | changed: [testbed-manager] 2025-03-27 00:10:19.790605 | orchestrator | 2025-03-27 00:10:19.790657 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-03-27 00:10:19.790676 | orchestrator | changed: [testbed-manager] 2025-03-27 00:10:21.765910 | orchestrator | 2025-03-27 00:10:21.766095 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-03-27 00:10:21.766152 | orchestrator | changed: [testbed-manager] 2025-03-27 00:10:22.997788 | orchestrator | 2025-03-27 00:10:22.997871 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-03-27 00:10:22.997898 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-03-27 00:10:23.040942 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-03-27 00:10:23.041004 | orchestrator | 2025-03-27 00:10:23.041020 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-03-27 00:10:23.041041 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-03-27 00:10:27.438462 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-03-27 00:10:27.438509 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-03-27 00:10:27.438517 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-03-27 00:10:27.438531 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-03-27 00:10:28.011067 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-03-27 00:10:28.011163 | orchestrator | 2025-03-27 00:10:28.011183 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-03-27 00:10:28.011214 | orchestrator | changed: [testbed-manager] 2025-03-27 00:10:47.720901 | orchestrator | 2025-03-27 00:10:47.720971 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-03-27 00:10:47.720988 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-03-27 00:10:50.196632 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-03-27 00:10:50.196674 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-03-27 00:10:50.196680 | orchestrator | 2025-03-27 00:10:50.196688 | orchestrator | TASK [Install local collections] *********************************************** 2025-03-27 00:10:50.196701 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-03-27 00:10:51.732274 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-03-27 00:10:51.732317 | orchestrator | 2025-03-27 00:10:51.732324 | orchestrator | PLAY [Create operator user] **************************************************** 2025-03-27 00:10:51.732330 | orchestrator | 2025-03-27 00:10:51.732336 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-27 00:10:51.732347 | orchestrator | ok: [testbed-manager] 2025-03-27 00:10:51.778263 | orchestrator | 2025-03-27 00:10:51.778309 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-03-27 00:10:51.778326 | orchestrator | ok: [testbed-manager] 2025-03-27 00:10:51.835922 | orchestrator | 2025-03-27 00:10:51.835969 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-03-27 00:10:51.835987 | orchestrator | ok: [testbed-manager] 2025-03-27 00:10:52.640139 | orchestrator | 2025-03-27 00:10:52.640188 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-03-27 00:10:52.640209 | orchestrator | changed: [testbed-manager] 2025-03-27 00:10:53.432753 | orchestrator | 2025-03-27 00:10:53.432806 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-03-27 00:10:53.432825 | orchestrator | changed: [testbed-manager] 2025-03-27 00:10:54.862614 | orchestrator | 2025-03-27 00:10:54.862722 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-03-27 00:10:54.862773 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-03-27 00:10:56.310918 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-03-27 00:10:56.311007 | orchestrator | 2025-03-27 00:10:56.311023 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-03-27 00:10:56.311050 | orchestrator | changed: [testbed-manager] 2025-03-27 00:10:58.204704 | orchestrator | 2025-03-27 00:10:58.204778 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-03-27 00:10:58.204803 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-03-27 
00:10:58.801929 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-03-27 00:10:58.802089 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-03-27 00:10:58.802113 | orchestrator | 2025-03-27 00:10:58.802129 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-03-27 00:10:58.802164 | orchestrator | changed: [testbed-manager] 2025-03-27 00:10:58.867079 | orchestrator | 2025-03-27 00:10:58.867177 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-03-27 00:10:58.867209 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:10:59.758862 | orchestrator | 2025-03-27 00:10:59.758934 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-03-27 00:10:59.758955 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-27 00:10:59.794366 | orchestrator | changed: [testbed-manager] 2025-03-27 00:10:59.794437 | orchestrator | 2025-03-27 00:10:59.794450 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-03-27 00:10:59.794467 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:10:59.824555 | orchestrator | 2025-03-27 00:10:59.824597 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-03-27 00:10:59.824613 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:10:59.855532 | orchestrator | 2025-03-27 00:10:59.855569 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-03-27 00:10:59.855585 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:10:59.901737 | orchestrator | 2025-03-27 00:10:59.901779 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-03-27 00:10:59.901797 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:11:00.645656 | orchestrator | 2025-03-27 00:11:00.645778 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-03-27 00:11:00.645817 | orchestrator | ok: [testbed-manager] 2025-03-27 00:11:02.115436 | orchestrator | 2025-03-27 00:11:02.115570 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-03-27 00:11:02.115592 | orchestrator | 2025-03-27 00:11:02.115607 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-27 00:11:02.115641 | orchestrator | ok: [testbed-manager] 2025-03-27 00:11:03.123304 | orchestrator | 2025-03-27 00:11:03.123982 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-03-27 00:11:03.124029 | orchestrator | changed: [testbed-manager] 2025-03-27 00:11:03.220389 | orchestrator | 2025-03-27 00:11:03.220616 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:11:03.220641 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-03-27 00:11:03.220656 | orchestrator | 2025-03-27 00:11:03.355978 | orchestrator | changed 2025-03-27 00:11:03.377289 | 2025-03-27 00:11:03.377413 | TASK [Point out that the log in on the manager is now possible] 2025-03-27 00:11:03.430744 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
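With 'Run manager part 0' finished, the operator user exists and the build's SSH keys are installed, so a manual login is also possible without the Makefile wrapper. A minimal sketch (the 'dragon' user name is an assumption inferred from home-directory paths that appear later in this log, and the two variables stand for the private key and manager address fetched in earlier tasks):

    # Hypothetical manual login; 'make login' presumably wraps something along these lines.
    ssh -i "$TERRAFORM_PRIVATE_KEY" "dragon@$MANAGER_ADDRESS"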
2025-03-27 00:11:03.442692 | 2025-03-27 00:11:03.442800 | TASK [Point out that the following task takes some time and does not give any output] 2025-03-27 00:11:03.495588 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-03-27 00:11:03.507343 | 2025-03-27 00:11:03.507453 | TASK [Run manager part 1 + 2] 2025-03-27 00:11:04.316903 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-03-27 00:11:04.367994 | orchestrator | 2025-03-27 00:11:06.978912 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-03-27 00:11:06.978969 | orchestrator | 2025-03-27 00:11:06.978988 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-27 00:11:06.979004 | orchestrator | ok: [testbed-manager] 2025-03-27 00:11:07.017341 | orchestrator | 2025-03-27 00:11:07.017403 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-03-27 00:11:07.017488 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:11:07.056316 | orchestrator | 2025-03-27 00:11:07.056365 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-03-27 00:11:07.056383 | orchestrator | ok: [testbed-manager] 2025-03-27 00:11:07.094728 | orchestrator | 2025-03-27 00:11:07.094778 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-27 00:11:07.094796 | orchestrator | ok: [testbed-manager] 2025-03-27 00:11:07.155436 | orchestrator | 2025-03-27 00:11:07.155488 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-03-27 00:11:07.155508 | orchestrator | ok: [testbed-manager] 2025-03-27 00:11:07.210032 | orchestrator | 2025-03-27 00:11:07.210086 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-27 00:11:07.210104 | orchestrator | ok: [testbed-manager] 2025-03-27 00:11:07.251147 | orchestrator | 2025-03-27 00:11:07.251190 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-27 00:11:07.251205 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-03-27 00:11:07.986296 | orchestrator | 2025-03-27 00:11:07.986353 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-27 00:11:07.986371 | orchestrator | ok: [testbed-manager] 2025-03-27 00:11:08.034830 | orchestrator | 2025-03-27 00:11:08.034891 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-27 00:11:08.034909 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:11:09.571483 | orchestrator | 2025-03-27 00:11:09.571539 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-27 00:11:09.571564 | orchestrator | changed: [testbed-manager] 2025-03-27 00:11:10.187602 | orchestrator | 2025-03-27 00:11:10.187651 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-03-27 00:11:10.187669 | orchestrator | ok: [testbed-manager] 2025-03-27 00:11:12.108335 | orchestrator | 2025-03-27 00:11:12.108424 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-27 00:11:12.108454 | orchestrator | changed: [testbed-manager] 2025-03-27 00:11:25.878732 | orchestrator | 2025-03-27 00:11:25.878836 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-27 00:11:25.878869 | orchestrator | changed: [testbed-manager] 2025-03-27 00:11:26.564888 | orchestrator | 2025-03-27 00:11:26.564939 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-03-27 00:11:26.564958 | orchestrator | ok: [testbed-manager] 2025-03-27 00:11:26.617376 | orchestrator | 2025-03-27 00:11:26.617451 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-03-27 00:11:26.617470 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:11:27.608734 | orchestrator | 2025-03-27 00:11:27.608787 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-03-27 00:11:27.608805 | orchestrator | changed: [testbed-manager] 2025-03-27 00:11:28.638636 | orchestrator | 2025-03-27 00:11:28.638746 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-03-27 00:11:28.638780 | orchestrator | changed: [testbed-manager] 2025-03-27 00:11:29.212598 | orchestrator | 2025-03-27 00:11:29.212696 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-03-27 00:11:29.212732 | orchestrator | changed: [testbed-manager] 2025-03-27 00:11:29.252895 | orchestrator | 2025-03-27 00:11:29.252947 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-03-27 00:11:29.252962 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-03-27 00:11:32.158984 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-03-27 00:11:32.159036 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-03-27 00:11:32.159045 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-03-27 00:11:32.159058 | orchestrator | changed: [testbed-manager] 2025-03-27 00:11:41.916541 | orchestrator | 2025-03-27 00:11:41.916702 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-03-27 00:11:41.916738 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-03-27 00:11:42.983492 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-03-27 00:11:42.983599 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-03-27 00:11:42.983619 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-03-27 00:11:42.983636 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-03-27 00:11:42.983650 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-03-27 00:11:42.983665 | orchestrator | 2025-03-27 00:11:42.983680 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-03-27 00:11:42.983725 | orchestrator | changed: [testbed-manager] 2025-03-27 00:11:43.024967 | orchestrator | 2025-03-27 00:11:43.025065 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-03-27 00:11:43.025099 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:11:46.459113 | orchestrator | 2025-03-27 00:11:46.459211 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-03-27 00:11:46.459245 | orchestrator | changed: [testbed-manager] 2025-03-27 00:11:46.488390 | orchestrator | 2025-03-27 00:11:46.488489 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-03-27 00:11:46.488518 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:13:32.956390 | orchestrator | 2025-03-27 00:13:32.956508 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-03-27 00:13:32.956539 | orchestrator | changed: [testbed-manager] 2025-03-27 00:13:33.947989 | orchestrator | 2025-03-27 00:13:33.948080 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-03-27 00:13:33.948108 | orchestrator | ok: [testbed-manager] 2025-03-27 00:13:34.037637 | orchestrator | 2025-03-27 00:13:34.037686 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:13:34.037701 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-03-27 00:13:34.037714 | orchestrator | 2025-03-27 00:13:34.139150 | orchestrator | changed 2025-03-27 00:13:34.158501 | 2025-03-27 00:13:34.158635 | TASK [Reboot manager] 2025-03-27 00:13:35.741888 | orchestrator | changed 2025-03-27 00:13:35.751505 | 2025-03-27 00:13:35.751621 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-03-27 00:13:52.436871 | orchestrator | ok 2025-03-27 00:13:52.447808 | 2025-03-27 00:13:52.447925 | TASK [Wait a little longer for the manager so that everything is ready] 2025-03-27 00:14:52.497343 | orchestrator | ok 2025-03-27 00:14:52.507058 | 2025-03-27 00:14:52.507163 | TASK [Deploy manager + bootstrap nodes] 2025-03-27 00:14:55.406734 | orchestrator | 2025-03-27 00:14:55.410889 | orchestrator | # DEPLOY MANAGER 2025-03-27 00:14:55.410957 | orchestrator | 2025-03-27 00:14:55.410975 | orchestrator | + set -e 2025-03-27 00:14:55.411020 | orchestrator | + echo 2025-03-27 00:14:55.411040 | orchestrator | + echo '# DEPLOY MANAGER' 2025-03-27 00:14:55.411057 | 
orchestrator | + echo 2025-03-27 00:14:55.411082 | orchestrator | + cat /opt/manager-vars.sh 2025-03-27 00:14:55.411118 | orchestrator | export NUMBER_OF_NODES=6 2025-03-27 00:14:55.412354 | orchestrator | 2025-03-27 00:14:55.412422 | orchestrator | export CEPH_VERSION=quincy 2025-03-27 00:14:55.412438 | orchestrator | export CONFIGURATION_VERSION=main 2025-03-27 00:14:55.412488 | orchestrator | export MANAGER_VERSION=8.1.0 2025-03-27 00:14:55.412503 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-03-27 00:14:55.412518 | orchestrator | 2025-03-27 00:14:55.412556 | orchestrator | export ARA=false 2025-03-27 00:14:55.412571 | orchestrator | export TEMPEST=false 2025-03-27 00:14:55.412586 | orchestrator | export IS_ZUUL=true 2025-03-27 00:14:55.412600 | orchestrator | 2025-03-27 00:14:55.412634 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.178 2025-03-27 00:14:55.412651 | orchestrator | export EXTERNAL_API=false 2025-03-27 00:14:55.412665 | orchestrator | 2025-03-27 00:14:55.412679 | orchestrator | export IMAGE_USER=ubuntu 2025-03-27 00:14:55.412694 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-03-27 00:14:55.412709 | orchestrator | 2025-03-27 00:14:55.412722 | orchestrator | export CEPH_STACK=ceph-ansible 2025-03-27 00:14:55.412736 | orchestrator | 2025-03-27 00:14:55.412750 | orchestrator | + echo 2025-03-27 00:14:55.412764 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-27 00:14:55.412784 | orchestrator | ++ export INTERACTIVE=false 2025-03-27 00:14:55.474645 | orchestrator | ++ INTERACTIVE=false 2025-03-27 00:14:55.474678 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-27 00:14:55.474701 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-27 00:14:55.474716 | orchestrator | + source /opt/manager-vars.sh 2025-03-27 00:14:55.474730 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-27 00:14:55.474743 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-27 00:14:55.474757 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-27 00:14:55.474771 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-27 00:14:55.474785 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-27 00:14:55.474800 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-27 00:14:55.474821 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-27 00:14:55.474835 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-03-27 00:14:55.474849 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-27 00:14:55.474863 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-27 00:14:55.474877 | orchestrator | ++ export ARA=false 2025-03-27 00:14:55.474891 | orchestrator | ++ ARA=false 2025-03-27 00:14:55.474905 | orchestrator | ++ export TEMPEST=false 2025-03-27 00:14:55.474919 | orchestrator | ++ TEMPEST=false 2025-03-27 00:14:55.474932 | orchestrator | ++ export IS_ZUUL=true 2025-03-27 00:14:55.474946 | orchestrator | ++ IS_ZUUL=true 2025-03-27 00:14:55.474960 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.178 2025-03-27 00:14:55.474974 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.178 2025-03-27 00:14:55.474995 | orchestrator | ++ export EXTERNAL_API=false 2025-03-27 00:14:55.475009 | orchestrator | ++ EXTERNAL_API=false 2025-03-27 00:14:55.475023 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-27 00:14:55.475037 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-27 00:14:55.475051 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-27 00:14:55.475064 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-27 00:14:55.475082 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-03-27 00:14:55.475096 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-27 00:14:55.475110 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-03-27 00:14:55.475138 | orchestrator | + docker version 2025-03-27 00:14:55.770994 | orchestrator | Client: Docker Engine - Community 2025-03-27 00:14:55.772707 | orchestrator | Version: 26.1.4 2025-03-27 00:14:55.772748 | orchestrator | API version: 1.45 2025-03-27 00:14:55.772764 | orchestrator | Go version: go1.21.11 2025-03-27 00:14:55.772778 | orchestrator | Git commit: 5650f9b 2025-03-27 00:14:55.772792 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-03-27 00:14:55.772807 | orchestrator | OS/Arch: linux/amd64 2025-03-27 00:14:55.772821 | orchestrator | Context: default 2025-03-27 00:14:55.772835 | orchestrator | 2025-03-27 00:14:55.772850 | orchestrator | Server: Docker Engine - Community 2025-03-27 00:14:55.772864 | orchestrator | Engine: 2025-03-27 00:14:55.772878 | orchestrator | Version: 26.1.4 2025-03-27 00:14:55.772892 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-03-27 00:14:55.772906 | orchestrator | Go version: go1.21.11 2025-03-27 00:14:55.772930 | orchestrator | Git commit: de5c9cf 2025-03-27 00:14:55.772970 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-03-27 00:14:55.772985 | orchestrator | OS/Arch: linux/amd64 2025-03-27 00:14:55.772999 | orchestrator | Experimental: false 2025-03-27 00:14:55.773012 | orchestrator | containerd: 2025-03-27 00:14:55.773026 | orchestrator | Version: 1.7.26 2025-03-27 00:14:55.773040 | orchestrator | GitCommit: 753481ec61c7c8955a23d6ff7bc8e4daed455734 2025-03-27 00:14:55.773053 | orchestrator | runc: 2025-03-27 00:14:55.773067 | orchestrator | Version: 1.2.5 2025-03-27 00:14:55.773082 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-03-27 00:14:55.773096 | orchestrator | docker-init: 2025-03-27 00:14:55.773109 | orchestrator | Version: 0.19.0 2025-03-27 00:14:55.773123 | orchestrator | GitCommit: de40ad0 2025-03-27 00:14:55.773145 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-03-27 00:14:55.780972 | orchestrator | + set -e 2025-03-27 00:14:55.781140 | orchestrator | + source /opt/manager-vars.sh 2025-03-27 00:14:55.781172 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-27 00:14:55.781187 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-27 00:14:55.781201 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-27 00:14:55.781215 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-27 00:14:55.781229 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-27 00:14:55.781242 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-27 00:14:55.781256 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-27 00:14:55.781270 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-03-27 00:14:55.781284 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-27 00:14:55.781298 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-27 00:14:55.781311 | orchestrator | ++ export ARA=false 2025-03-27 00:14:55.781325 | orchestrator | ++ ARA=false 2025-03-27 00:14:55.781339 | orchestrator | ++ export TEMPEST=false 2025-03-27 00:14:55.781353 | orchestrator | ++ TEMPEST=false 2025-03-27 00:14:55.781366 | orchestrator | ++ export IS_ZUUL=true 2025-03-27 00:14:55.781419 | orchestrator | ++ IS_ZUUL=true 2025-03-27 00:14:55.781434 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.178 2025-03-27 00:14:55.781448 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.178 
2025-03-27 00:14:55.781462 | orchestrator | ++ export EXTERNAL_API=false 2025-03-27 00:14:55.781476 | orchestrator | ++ EXTERNAL_API=false 2025-03-27 00:14:55.781489 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-27 00:14:55.781503 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-27 00:14:55.781523 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-27 00:14:55.781536 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-27 00:14:55.781550 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-03-27 00:14:55.781564 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-27 00:14:55.781578 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-27 00:14:55.781592 | orchestrator | ++ export INTERACTIVE=false 2025-03-27 00:14:55.781606 | orchestrator | ++ INTERACTIVE=false 2025-03-27 00:14:55.781619 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-27 00:14:55.781633 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-27 00:14:55.781652 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-27 00:14:55.787060 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-03-27 00:14:55.787094 | orchestrator | + set -e 2025-03-27 00:14:55.795857 | orchestrator | + VERSION=8.1.0 2025-03-27 00:14:55.795884 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-03-27 00:14:55.795912 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-27 00:14:55.801454 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-03-27 00:14:55.801489 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-03-27 00:14:55.805632 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-03-27 00:14:55.815129 | orchestrator | /opt/configuration ~ 2025-03-27 00:14:55.819076 | orchestrator | + set -e 2025-03-27 00:14:55.819101 | orchestrator | + pushd /opt/configuration 2025-03-27 00:14:55.819116 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-27 00:14:55.819142 | orchestrator | + source /opt/venv/bin/activate 2025-03-27 00:14:55.820404 | orchestrator | ++ deactivate nondestructive 2025-03-27 00:14:55.820426 | orchestrator | ++ '[' -n '' ']' 2025-03-27 00:14:55.820444 | orchestrator | ++ '[' -n '' ']' 2025-03-27 00:14:55.820623 | orchestrator | ++ hash -r 2025-03-27 00:14:55.820647 | orchestrator | ++ '[' -n '' ']' 2025-03-27 00:14:55.820832 | orchestrator | ++ unset VIRTUAL_ENV 2025-03-27 00:14:55.820852 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-03-27 00:14:55.820867 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-03-27 00:14:55.820921 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-03-27 00:14:55.820950 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-03-27 00:14:55.820965 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-03-27 00:14:55.820980 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-03-27 00:14:55.820994 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-27 00:14:55.821013 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-27 00:14:55.821109 | orchestrator | ++ export PATH 2025-03-27 00:14:55.821129 | orchestrator | ++ '[' -n '' ']' 2025-03-27 00:14:55.821332 | orchestrator | ++ '[' -z '' ']' 2025-03-27 00:14:55.821454 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-03-27 00:14:55.821478 | orchestrator | ++ PS1='(venv) ' 2025-03-27 00:14:55.821503 | orchestrator | ++ export PS1 2025-03-27 00:14:55.821517 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-03-27 00:14:55.821531 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-03-27 00:14:55.821545 | orchestrator | ++ hash -r 2025-03-27 00:14:55.821563 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-03-27 00:14:57.309817 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-03-27 00:14:57.310679 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-03-27 00:14:57.312043 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-03-27 00:14:57.313727 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-03-27 00:14:57.315182 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (24.2) 2025-03-27 00:14:57.325335 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8) 2025-03-27 00:14:57.327145 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-03-27 00:14:57.328411 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-03-27 00:14:57.329836 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-03-27 00:14:57.365095 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.1) 2025-03-27 00:14:57.366924 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-03-27 00:14:57.368610 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.3.0) 2025-03-27 00:14:57.370299 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.1.31) 2025-03-27 00:14:57.374417 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-03-27 00:14:57.618184 | orchestrator | ++ which gilt 2025-03-27 00:14:57.621011 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-03-27 00:14:57.917814 | orchestrator | + /opt/venv/bin/gilt overlay 2025-03-27 00:14:57.917933 | orchestrator | osism.cfg-generics: 2025-03-27 00:14:59.580776 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-03-27 00:14:59.580917 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-03-27 00:14:59.581139 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-03-27 00:14:59.581170 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-03-27 00:14:59.581320 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-03-27 00:15:00.592588 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-03-27 00:15:00.603892 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-03-27 00:15:01.070389 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-03-27 00:15:01.136688 | orchestrator | ~ 2025-03-27 00:15:01.138821 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-27 00:15:01.138848 | orchestrator | + deactivate 2025-03-27 00:15:01.138882 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-03-27 00:15:01.138897 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-27 00:15:01.138910 | orchestrator | + export PATH 2025-03-27 00:15:01.138923 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-03-27 00:15:01.138935 | orchestrator | + '[' -n '' ']' 2025-03-27 00:15:01.138947 | orchestrator | + hash -r 2025-03-27 00:15:01.138960 | orchestrator | + '[' -n '' ']' 2025-03-27 00:15:01.138972 | orchestrator | + unset VIRTUAL_ENV 2025-03-27 00:15:01.138984 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-03-27 00:15:01.138997 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-03-27 00:15:01.139012 | orchestrator | + unset -f deactivate 2025-03-27 00:15:01.139024 | orchestrator | + popd 2025-03-27 00:15:01.139042 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-03-27 00:15:01.139743 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-03-27 00:15:01.139769 | orchestrator | ++ semver 8.1.0 7.0.0 2025-03-27 00:15:01.195115 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-27 00:15:01.238190 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-03-27 00:15:01.238320 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-03-27 00:15:01.238354 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-27 00:15:01.238858 | orchestrator | + source /opt/venv/bin/activate 2025-03-27 00:15:01.238977 | orchestrator | ++ deactivate nondestructive 2025-03-27 00:15:01.238999 | orchestrator | ++ '[' -n '' ']' 2025-03-27 00:15:01.239034 | orchestrator | ++ '[' -n '' ']' 2025-03-27 00:15:01.239050 | orchestrator | ++ hash -r 2025-03-27 00:15:01.239064 | orchestrator | ++ '[' -n '' ']' 2025-03-27 00:15:01.239078 | orchestrator | ++ unset VIRTUAL_ENV 2025-03-27 00:15:01.239093 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-03-27 00:15:01.239107 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-03-27 00:15:01.239122 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-03-27 00:15:01.239136 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-03-27 00:15:01.239150 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-03-27 00:15:01.239165 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-03-27 00:15:01.239197 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-27 00:15:02.793960 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-27 00:15:02.794111 | orchestrator | ++ export PATH 2025-03-27 00:15:02.794133 | orchestrator | ++ '[' -n '' ']' 2025-03-27 00:15:02.794148 | orchestrator | ++ '[' -z '' ']' 2025-03-27 00:15:02.794163 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-03-27 00:15:02.794179 | orchestrator | ++ PS1='(venv) ' 2025-03-27 00:15:02.794193 | orchestrator | ++ export PS1 2025-03-27 00:15:02.794207 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-03-27 00:15:02.794222 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-03-27 00:15:02.794239 | orchestrator | ++ hash -r 2025-03-27 00:15:02.794254 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-03-27 00:15:02.794286 | orchestrator | 2025-03-27 00:15:03.449250 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-03-27 00:15:03.449348 | orchestrator | 2025-03-27 00:15:03.449365 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-03-27 00:15:03.449448 | orchestrator | ok: [testbed-manager] 2025-03-27 00:15:04.544597 | orchestrator | 2025-03-27 00:15:04.544709 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-03-27 00:15:04.544743 | orchestrator | changed: [testbed-manager] 2025-03-27 00:15:07.248587 | orchestrator | 2025-03-27 00:15:07.248709 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-03-27 00:15:07.248728 | orchestrator | 2025-03-27 
00:15:07.248742 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-27 00:15:07.248775 | orchestrator | ok: [testbed-manager] 2025-03-27 00:15:13.277669 | orchestrator | 2025-03-27 00:15:13.277799 | orchestrator | TASK [Pull images] ************************************************************* 2025-03-27 00:15:13.277865 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-03-27 00:16:40.101895 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-03-27 00:16:40.102112 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-03-27 00:16:40.102137 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-03-27 00:16:40.102153 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-03-27 00:16:40.102169 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine) 2025-03-27 00:16:40.102184 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-03-27 00:16:40.102198 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-03-27 00:16:40.102212 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-03-27 00:16:40.102234 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-03-27 00:16:40.102249 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1) 2025-03-27 00:16:40.102264 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2) 2025-03-27 00:16:40.102278 | orchestrator | 2025-03-27 00:16:40.102292 | orchestrator | TASK [Check status] ************************************************************ 2025-03-27 00:16:40.102369 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-03-27 00:16:40.160696 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-03-27 00:16:40.160805 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-03-27 00:16:40.160825 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-03-27 00:16:40.160840 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (116 retries left). 2025-03-27 00:16:40.160859 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j513125117438.1587', 'results_file': '/home/dragon/.ansible_async/j513125117438.1587', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.160920 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j223008513687.1612', 'results_file': '/home/dragon/.ansible_async/j223008513687.1612', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.160937 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-03-27 00:16:40.160952 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
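The "Pull images" task above dispatches every docker pull as an asynchronous Ansible job, and the surrounding "Check status" task then polls those jobs (up to 120 retries per item) until each pull reports finished. Outside of Ansible, the same fan-out-then-wait pattern looks roughly like the following shell sketch; the image references are taken from the log, everything else is illustrative:

#!/usr/bin/env bash
# Start each pull in the background, then wait for all of them instead of pulling serially.
images=(
  registry.osism.tech/osism/ara-server:1.7.2
  registry.osism.tech/osism/kolla-ansible:8.1.0
  index.docker.io/library/mariadb:11.6.2
)
pids=()
for image in "${images[@]}"; do
  docker pull "$image" >/dev/null &
  pids+=("$!")
done
for pid in "${pids[@]}"; do
  wait "$pid" || echo "pull of one image failed" >&2
done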
2025-03-27 00:16:40.160967 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j644387166406.1637', 'results_file': '/home/dragon/.ansible_async/j644387166406.1637', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.160988 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j302927775104.1668', 'results_file': '/home/dragon/.ansible_async/j302927775104.1668', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.161008 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j442749366015.1706', 'results_file': '/home/dragon/.ansible_async/j442749366015.1706', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.161023 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j40327559400.1733', 'results_file': '/home/dragon/.ansible_async/j40327559400.1733', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.161066 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-03-27 00:16:40.161081 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j40408098344.1773', 'results_file': '/home/dragon/.ansible_async/j40408098344.1773', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.161096 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j575732066107.1798', 'results_file': '/home/dragon/.ansible_async/j575732066107.1798', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.161110 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j695288857912.1831', 'results_file': '/home/dragon/.ansible_async/j695288857912.1831', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.161124 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j515662415789.1864', 'results_file': '/home/dragon/.ansible_async/j515662415789.1864', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.161139 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j487804751536.1897', 'results_file': '/home/dragon/.ansible_async/j487804751536.1897', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.161153 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j174321934228.1929', 'results_file': '/home/dragon/.ansible_async/j174321934228.1929', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-03-27 00:16:40.161167 | orchestrator | 2025-03-27 00:16:40.161183 | orchestrator | TASK [Get /opt/manager-vars.sh] 
************************************************ 2025-03-27 00:16:40.161211 | orchestrator | ok: [testbed-manager] 2025-03-27 00:16:40.660033 | orchestrator | 2025-03-27 00:16:40.660133 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-03-27 00:16:40.660168 | orchestrator | changed: [testbed-manager] 2025-03-27 00:16:41.002271 | orchestrator | 2025-03-27 00:16:41.002479 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-03-27 00:16:41.002538 | orchestrator | changed: [testbed-manager] 2025-03-27 00:16:41.357242 | orchestrator | 2025-03-27 00:16:41.357395 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-03-27 00:16:41.357435 | orchestrator | changed: [testbed-manager] 2025-03-27 00:16:41.417367 | orchestrator | 2025-03-27 00:16:41.417440 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-03-27 00:16:41.417472 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:16:41.792573 | orchestrator | 2025-03-27 00:16:41.792666 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-03-27 00:16:41.792699 | orchestrator | ok: [testbed-manager] 2025-03-27 00:16:41.981509 | orchestrator | 2025-03-27 00:16:41.981595 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-03-27 00:16:41.981618 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:16:44.006933 | orchestrator | 2025-03-27 00:16:44.007047 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-03-27 00:16:44.007065 | orchestrator | 2025-03-27 00:16:44.007080 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-27 00:16:44.007113 | orchestrator | ok: [testbed-manager] 2025-03-27 00:16:44.326547 | orchestrator | 2025-03-27 00:16:44.326651 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-03-27 00:16:44.326685 | orchestrator | 2025-03-27 00:16:44.432294 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-03-27 00:16:44.432406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-03-27 00:16:45.656141 | orchestrator | 2025-03-27 00:16:45.656257 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-03-27 00:16:45.656295 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-03-27 00:16:47.787417 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-03-27 00:16:47.787538 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-03-27 00:16:47.787556 | orchestrator | 2025-03-27 00:16:47.787572 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-03-27 00:16:47.787603 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-03-27 00:16:48.500827 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-03-27 00:16:48.500934 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-03-27 00:16:48.500952 | orchestrator | 2025-03-27 00:16:48.500967 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2025-03-27 00:16:48.501000 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-27 00:16:49.197158 | orchestrator | changed: [testbed-manager] 2025-03-27 00:16:49.197255 | orchestrator | 2025-03-27 00:16:49.197272 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-03-27 00:16:49.197300 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-27 00:16:49.278211 | orchestrator | changed: [testbed-manager] 2025-03-27 00:16:49.278261 | orchestrator | 2025-03-27 00:16:49.278277 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-03-27 00:16:49.278302 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:16:49.709500 | orchestrator | 2025-03-27 00:16:49.709576 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-03-27 00:16:49.709604 | orchestrator | ok: [testbed-manager] 2025-03-27 00:16:49.826609 | orchestrator | 2025-03-27 00:16:49.826645 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-03-27 00:16:49.826669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-03-27 00:16:50.950724 | orchestrator | 2025-03-27 00:16:50.950839 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-03-27 00:16:50.950872 | orchestrator | changed: [testbed-manager] 2025-03-27 00:16:51.864379 | orchestrator | 2025-03-27 00:16:51.864494 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-03-27 00:16:51.864531 | orchestrator | changed: [testbed-manager] 2025-03-27 00:16:55.338393 | orchestrator | 2025-03-27 00:16:55.338504 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-03-27 00:16:55.338537 | orchestrator | changed: [testbed-manager] 2025-03-27 00:16:55.666498 | orchestrator | 2025-03-27 00:16:55.666553 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-03-27 00:16:55.666583 | orchestrator | 2025-03-27 00:16:55.793094 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-03-27 00:16:55.793142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-03-27 00:16:58.516110 | orchestrator | 2025-03-27 00:16:58.516218 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-03-27 00:16:58.516250 | orchestrator | ok: [testbed-manager] 2025-03-27 00:16:58.706667 | orchestrator | 2025-03-27 00:16:58.706729 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-03-27 00:16:58.706756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-03-27 00:16:59.911696 | orchestrator | 2025-03-27 00:16:59.911813 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-03-27 00:16:59.911849 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-03-27 00:17:00.033162 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-03-27 00:17:00.033197 | orchestrator | 
changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-03-27 00:17:00.033212 | orchestrator | 2025-03-27 00:17:00.033227 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-03-27 00:17:00.033249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-03-27 00:17:00.728171 | orchestrator | 2025-03-27 00:17:00.728275 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-03-27 00:17:00.728308 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-03-27 00:17:01.450371 | orchestrator | 2025-03-27 00:17:01.450504 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-03-27 00:17:01.450550 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-27 00:17:01.877858 | orchestrator | changed: [testbed-manager] 2025-03-27 00:17:01.877958 | orchestrator | 2025-03-27 00:17:01.877973 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-03-27 00:17:01.877998 | orchestrator | changed: [testbed-manager] 2025-03-27 00:17:02.283755 | orchestrator | 2025-03-27 00:17:02.283813 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-03-27 00:17:02.283834 | orchestrator | ok: [testbed-manager] 2025-03-27 00:17:02.361220 | orchestrator | 2025-03-27 00:17:02.361255 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-03-27 00:17:02.361272 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:17:03.050762 | orchestrator | 2025-03-27 00:17:03.050852 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-03-27 00:17:03.050883 | orchestrator | changed: [testbed-manager] 2025-03-27 00:17:03.174649 | orchestrator | 2025-03-27 00:17:03.174692 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-03-27 00:17:03.174717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-03-27 00:17:04.035591 | orchestrator | 2025-03-27 00:17:04.035699 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-03-27 00:17:04.035732 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-03-27 00:17:04.767022 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-03-27 00:17:04.767137 | orchestrator | 2025-03-27 00:17:04.767156 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-03-27 00:17:04.767187 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-03-27 00:17:05.532418 | orchestrator | 2025-03-27 00:17:05.532538 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-03-27 00:17:05.532575 | orchestrator | changed: [testbed-manager] 2025-03-27 00:17:05.619289 | orchestrator | 2025-03-27 00:17:05.619400 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-03-27 00:17:05.619430 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:17:06.370633 | orchestrator | 2025-03-27 00:17:06.370746 | orchestrator | TASK 
[osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-03-27 00:17:06.370770 | orchestrator | changed: [testbed-manager] 2025-03-27 00:17:08.472487 | orchestrator | 2025-03-27 00:17:08.472641 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-03-27 00:17:08.472680 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-27 00:17:15.278156 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-27 00:17:15.278294 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-27 00:17:15.278315 | orchestrator | changed: [testbed-manager] 2025-03-27 00:17:15.278379 | orchestrator | 2025-03-27 00:17:15.278395 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-03-27 00:17:15.278429 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-03-27 00:17:15.985632 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-03-27 00:17:15.985722 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-03-27 00:17:15.985738 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-03-27 00:17:15.985752 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-03-27 00:17:15.985768 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-03-27 00:17:15.985782 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-03-27 00:17:15.985796 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-03-27 00:17:15.985810 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-03-27 00:17:15.985851 | orchestrator | changed: [testbed-manager] => (item=users) 2025-03-27 00:17:15.985866 | orchestrator | 2025-03-27 00:17:15.985881 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-03-27 00:17:15.985912 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-03-27 00:17:16.169559 | orchestrator | 2025-03-27 00:17:16.169646 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-03-27 00:17:16.169679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-03-27 00:17:16.979014 | orchestrator | 2025-03-27 00:17:16.979103 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-03-27 00:17:16.979135 | orchestrator | changed: [testbed-manager] 2025-03-27 00:17:17.689197 | orchestrator | 2025-03-27 00:17:17.689363 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-03-27 00:17:17.689401 | orchestrator | ok: [testbed-manager] 2025-03-27 00:17:18.521845 | orchestrator | 2025-03-27 00:17:18.521948 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-03-27 00:17:18.521980 | orchestrator | changed: [testbed-manager] 2025-03-27 00:17:24.419575 | orchestrator | 2025-03-27 00:17:24.419687 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-03-27 00:17:24.419716 | orchestrator | changed: [testbed-manager] 2025-03-27 00:17:25.498361 | orchestrator | 2025-03-27 00:17:25.498477 | orchestrator | TASK [osism.services.netbox : Stop and disable old service 
docker-compose@netbox] *** 2025-03-27 00:17:25.498509 | orchestrator | ok: [testbed-manager] 2025-03-27 00:17:47.848538 | orchestrator | 2025-03-27 00:17:47.848676 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-03-27 00:17:47.848716 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-03-27 00:17:47.941177 | orchestrator | ok: [testbed-manager] 2025-03-27 00:17:47.941269 | orchestrator | 2025-03-27 00:17:47.941302 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-03-27 00:17:47.941391 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:17:48.017148 | orchestrator | 2025-03-27 00:17:48.017229 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-03-27 00:17:48.017245 | orchestrator | 2025-03-27 00:17:48.017261 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-03-27 00:17:48.017287 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:17:48.117683 | orchestrator | 2025-03-27 00:17:48.117767 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-03-27 00:17:48.117797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-03-27 00:17:48.992726 | orchestrator | 2025-03-27 00:17:48.992841 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-03-27 00:17:48.992875 | orchestrator | ok: [testbed-manager] 2025-03-27 00:17:49.098516 | orchestrator | 2025-03-27 00:17:49.098617 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-03-27 00:17:49.098650 | orchestrator | ok: [testbed-manager] 2025-03-27 00:17:49.162201 | orchestrator | 2025-03-27 00:17:49.162273 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-03-27 00:17:49.162303 | orchestrator | ok: [testbed-manager] => { 2025-03-27 00:17:49.888051 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-03-27 00:17:49.888164 | orchestrator | } 2025-03-27 00:17:49.888182 | orchestrator | 2025-03-27 00:17:49.888197 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-03-27 00:17:49.888227 | orchestrator | ok: [testbed-manager] 2025-03-27 00:17:50.926854 | orchestrator | 2025-03-27 00:17:50.926969 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-03-27 00:17:50.927007 | orchestrator | ok: [testbed-manager] 2025-03-27 00:17:51.028722 | orchestrator | 2025-03-27 00:17:51.028857 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-03-27 00:17:51.028899 | orchestrator | ok: [testbed-manager] 2025-03-27 00:17:51.105842 | orchestrator | 2025-03-27 00:17:51.105924 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-03-27 00:17:51.105953 | orchestrator | ok: [testbed-manager] => { 2025-03-27 00:17:51.196361 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-03-27 00:17:51.196453 | orchestrator | } 2025-03-27 00:17:51.196469 | orchestrator | 2025-03-27 00:17:51.196484 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop 
netbox service] ****************** 2025-03-27 00:17:51.196526 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:17:51.278077 | orchestrator | 2025-03-27 00:17:51.278181 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-03-27 00:17:51.278214 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:17:51.364997 | orchestrator | 2025-03-27 00:17:51.365060 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-03-27 00:17:51.365088 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:17:51.450966 | orchestrator | 2025-03-27 00:17:51.451025 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-03-27 00:17:51.451051 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:17:51.527805 | orchestrator | 2025-03-27 00:17:51.527854 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-03-27 00:17:51.527880 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:17:51.661214 | orchestrator | 2025-03-27 00:17:51.661287 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-03-27 00:17:51.661359 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:17:53.012973 | orchestrator | 2025-03-27 00:17:53.013090 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-03-27 00:17:53.013129 | orchestrator | changed: [testbed-manager] 2025-03-27 00:17:53.141200 | orchestrator | 2025-03-27 00:17:53.141296 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-03-27 00:17:53.141389 | orchestrator | ok: [testbed-manager] 2025-03-27 00:18:53.215760 | orchestrator | 2025-03-27 00:18:53.215896 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-03-27 00:18:53.215936 | orchestrator | Pausing for 60 seconds 2025-03-27 00:18:53.335796 | orchestrator | changed: [testbed-manager] 2025-03-27 00:18:53.335831 | orchestrator | 2025-03-27 00:18:53.335847 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-03-27 00:18:53.335870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-03-27 00:23:38.272461 | orchestrator | 2025-03-27 00:23:38.272597 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-03-27 00:23:38.272635 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-03-27 00:23:40.404920 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-03-27 00:23:40.405036 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-03-27 00:23:40.405054 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-03-27 00:23:40.405069 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-03-27 00:23:40.405084 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 
2025-03-27 00:23:40.405098 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-03-27 00:23:40.405112 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-03-27 00:23:40.405126 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-03-27 00:23:40.405140 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-03-27 00:23:40.405154 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-03-27 00:23:40.405168 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-03-27 00:23:40.405216 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-03-27 00:23:40.405232 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-03-27 00:23:40.405299 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-03-27 00:23:40.405315 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-03-27 00:23:40.405329 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-03-27 00:23:40.405343 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-03-27 00:23:40.405357 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-03-27 00:23:40.405384 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-03-27 00:23:40.405399 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-03-27 00:23:40.405413 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-03-27 00:23:40.405427 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-03-27 00:23:40.405441 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 2025-03-27 00:23:40.405455 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (36 retries left). 2025-03-27 00:23:40.405470 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (35 retries left). 2025-03-27 00:23:40.405486 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (34 retries left). 
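The handler above keeps retrying "Check that all containers are in a good state" against a fixed retry budget until no container reports an unhealthy status anymore. A minimal shell equivalent of that bounded polling loop, with retry count and sleep interval chosen for illustration rather than taken from the role:

#!/usr/bin/env bash
# Poll until no running container reports health=unhealthy; give up after max_attempts tries.
max_attempts=60
attempt=1
while [ -n "$(docker ps --filter health=unhealthy --format '{{.Names}}')" ]; do
  if [ "$attempt" -ge "$max_attempts" ]; then
    echo "containers still unhealthy after ${max_attempts} attempts" >&2
    exit 1
  fi
  attempt=$((attempt + 1))
  sleep 5
done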
2025-03-27 00:23:40.405502 | orchestrator | changed: [testbed-manager] 2025-03-27 00:23:40.405519 | orchestrator | 2025-03-27 00:23:40.405536 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-03-27 00:23:40.405551 | orchestrator | 2025-03-27 00:23:40.405567 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-27 00:23:40.405598 | orchestrator | ok: [testbed-manager] 2025-03-27 00:23:40.522816 | orchestrator | 2025-03-27 00:23:40.522895 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-03-27 00:23:40.522925 | orchestrator | 2025-03-27 00:23:40.599955 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-03-27 00:23:40.600017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-03-27 00:23:42.617662 | orchestrator | 2025-03-27 00:23:42.617776 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-03-27 00:23:42.617810 | orchestrator | ok: [testbed-manager] 2025-03-27 00:23:42.678662 | orchestrator | 2025-03-27 00:23:42.678691 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-03-27 00:23:42.678712 | orchestrator | ok: [testbed-manager] 2025-03-27 00:23:42.781842 | orchestrator | 2025-03-27 00:23:42.781897 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-03-27 00:23:42.781924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-03-27 00:23:45.917213 | orchestrator | 2025-03-27 00:23:45.917357 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-03-27 00:23:45.917393 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-03-27 00:23:46.646941 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-03-27 00:23:46.647044 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-03-27 00:23:46.647062 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-03-27 00:23:46.647077 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-03-27 00:23:46.647123 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-03-27 00:23:46.647139 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-03-27 00:23:46.647154 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-03-27 00:23:46.647168 | orchestrator | 2025-03-27 00:23:46.647183 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-03-27 00:23:46.647213 | orchestrator | changed: [testbed-manager] 2025-03-27 00:23:46.749394 | orchestrator | 2025-03-27 00:23:46.749456 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-03-27 00:23:46.749501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-03-27 00:23:48.205635 | orchestrator | 2025-03-27 00:23:48.205740 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-03-27 00:23:48.205772 | orchestrator | 
changed: [testbed-manager] => (item=ara) 2025-03-27 00:23:48.894333 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-03-27 00:23:48.894441 | orchestrator | 2025-03-27 00:23:48.894459 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-03-27 00:23:48.894490 | orchestrator | changed: [testbed-manager] 2025-03-27 00:23:48.972845 | orchestrator | 2025-03-27 00:23:48.972903 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-03-27 00:23:48.972930 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:23:49.050513 | orchestrator | 2025-03-27 00:23:49.050599 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-03-27 00:23:49.050631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-03-27 00:23:50.611304 | orchestrator | 2025-03-27 00:23:50.611414 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-03-27 00:23:50.611447 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-27 00:23:51.298903 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-27 00:23:51.299017 | orchestrator | changed: [testbed-manager] 2025-03-27 00:23:51.299035 | orchestrator | 2025-03-27 00:23:51.299049 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-03-27 00:23:51.299076 | orchestrator | changed: [testbed-manager] 2025-03-27 00:23:51.414913 | orchestrator | 2025-03-27 00:23:51.414983 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-03-27 00:23:51.415009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-03-27 00:23:52.101042 | orchestrator | 2025-03-27 00:23:52.101140 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-03-27 00:23:52.101168 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-27 00:23:52.798460 | orchestrator | changed: [testbed-manager] 2025-03-27 00:23:52.798562 | orchestrator | 2025-03-27 00:23:52.798580 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-03-27 00:23:52.798608 | orchestrator | changed: [testbed-manager] 2025-03-27 00:23:52.922767 | orchestrator | 2025-03-27 00:23:52.922854 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-03-27 00:23:52.922883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-03-27 00:23:53.592502 | orchestrator | 2025-03-27 00:23:53.592612 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-03-27 00:23:53.592646 | orchestrator | changed: [testbed-manager] 2025-03-27 00:23:54.046565 | orchestrator | 2025-03-27 00:23:54.046694 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-03-27 00:23:54.046742 | orchestrator | changed: [testbed-manager] 2025-03-27 00:23:55.376996 | orchestrator | 2025-03-27 00:23:55.377114 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-03-27 00:23:55.377148 | 
orchestrator | changed: [testbed-manager] => (item=conductor) 2025-03-27 00:23:56.104914 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-03-27 00:23:56.105018 | orchestrator | 2025-03-27 00:23:56.105063 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-03-27 00:23:56.105092 | orchestrator | changed: [testbed-manager] 2025-03-27 00:23:56.493992 | orchestrator | 2025-03-27 00:23:56.494150 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-03-27 00:23:56.494183 | orchestrator | ok: [testbed-manager] 2025-03-27 00:23:56.611206 | orchestrator | 2025-03-27 00:23:56.611293 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-03-27 00:23:56.611323 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:23:57.319437 | orchestrator | 2025-03-27 00:23:57.319536 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-03-27 00:23:57.319569 | orchestrator | changed: [testbed-manager] 2025-03-27 00:23:57.394128 | orchestrator | 2025-03-27 00:23:57.394198 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-03-27 00:23:57.394226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-03-27 00:23:57.440966 | orchestrator | 2025-03-27 00:23:57.441015 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-03-27 00:23:57.441039 | orchestrator | ok: [testbed-manager] 2025-03-27 00:23:59.676218 | orchestrator | 2025-03-27 00:23:59.676366 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-03-27 00:23:59.676392 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-03-27 00:24:00.459906 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-03-27 00:24:00.460011 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-03-27 00:24:00.460027 | orchestrator | 2025-03-27 00:24:00.460043 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-03-27 00:24:00.460072 | orchestrator | changed: [testbed-manager] 2025-03-27 00:24:00.544931 | orchestrator | 2025-03-27 00:24:00.544961 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-03-27 00:24:00.544983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-03-27 00:24:00.609690 | orchestrator | 2025-03-27 00:24:00.609723 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-03-27 00:24:00.609744 | orchestrator | ok: [testbed-manager] 2025-03-27 00:24:01.368744 | orchestrator | 2025-03-27 00:24:01.368842 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-03-27 00:24:01.368869 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-03-27 00:24:01.477342 | orchestrator | 2025-03-27 00:24:01.477406 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-03-27 00:24:01.477431 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-03-27 00:24:02.292506 | orchestrator | 2025-03-27 00:24:02.292600 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-03-27 00:24:02.292628 | orchestrator | changed: [testbed-manager] 2025-03-27 00:24:02.972036 | orchestrator | 2025-03-27 00:24:02.972129 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-03-27 00:24:02.972154 | orchestrator | ok: [testbed-manager] 2025-03-27 00:24:03.027352 | orchestrator | 2025-03-27 00:24:03.027408 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-03-27 00:24:03.027432 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:24:03.096719 | orchestrator | 2025-03-27 00:24:03.096744 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-03-27 00:24:03.096759 | orchestrator | ok: [testbed-manager] 2025-03-27 00:24:03.987946 | orchestrator | 2025-03-27 00:24:03.988044 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-03-27 00:24:03.988074 | orchestrator | changed: [testbed-manager] 2025-03-27 00:24:49.162639 | orchestrator | 2025-03-27 00:24:49.162786 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-03-27 00:24:49.162833 | orchestrator | changed: [testbed-manager] 2025-03-27 00:24:49.832390 | orchestrator | 2025-03-27 00:24:49.832504 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-03-27 00:24:49.832567 | orchestrator | ok: [testbed-manager] 2025-03-27 00:24:52.664210 | orchestrator | 2025-03-27 00:24:52.664380 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-03-27 00:24:52.664418 | orchestrator | changed: [testbed-manager] 2025-03-27 00:24:52.729471 | orchestrator | 2025-03-27 00:24:52.729512 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-03-27 00:24:52.729535 | orchestrator | ok: [testbed-manager] 2025-03-27 00:24:52.814953 | orchestrator | 2025-03-27 00:24:52.814983 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-27 00:24:52.814998 | orchestrator | 2025-03-27 00:24:52.815012 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-03-27 00:24:52.815032 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:25:52.887371 | orchestrator | 2025-03-27 00:25:52.887541 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-03-27 00:25:52.887597 | orchestrator | Pausing for 60 seconds 2025-03-27 00:25:58.906802 | orchestrator | changed: [testbed-manager] 2025-03-27 00:25:58.906925 | orchestrator | 2025-03-27 00:25:58.906945 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-03-27 00:25:58.906977 | orchestrator | changed: [testbed-manager] 2025-03-27 00:26:40.721727 | orchestrator | 2025-03-27 00:26:40.721818 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-03-27 00:26:40.721836 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 
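The manager stack is started through its systemd unit and the play then pauses and polls until the containers report healthy. A recent Docker Compose v2 can express that start-and-wait step in a single call, which is roughly what the role's polling achieves; this is an alternative sketch, assuming a Compose release that supports --wait, not what the role itself runs:

# Bring up the manager project and block until its services are running and healthy.
docker compose --project-directory /opt/manager up -d --wait
# The same per-service health overview as printed later in this log:
docker compose --project-directory /opt/manager ps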
2025-03-27 00:26:47.515050 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-03-27 00:26:47.515182 | orchestrator | changed: [testbed-manager] 2025-03-27 00:26:47.515203 | orchestrator | 2025-03-27 00:26:47.515219 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-03-27 00:26:47.515304 | orchestrator | changed: [testbed-manager] 2025-03-27 00:26:47.621705 | orchestrator | 2025-03-27 00:26:47.621738 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-03-27 00:26:47.621762 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-03-27 00:26:47.698127 | orchestrator | 2025-03-27 00:26:47.698158 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-27 00:26:47.698172 | orchestrator | 2025-03-27 00:26:47.698198 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-03-27 00:26:47.698218 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:26:47.864449 | orchestrator | 2025-03-27 00:26:47.864482 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:26:47.864497 | orchestrator | testbed-manager : ok=103 changed=55 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-03-27 00:26:47.864512 | orchestrator | 2025-03-27 00:26:47.864532 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-27 00:26:47.873311 | orchestrator | + deactivate 2025-03-27 00:26:47.873339 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-03-27 00:26:47.873355 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-27 00:26:47.873370 | orchestrator | + export PATH 2025-03-27 00:26:47.873385 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-03-27 00:26:47.873400 | orchestrator | + '[' -n '' ']' 2025-03-27 00:26:47.873414 | orchestrator | + hash -r 2025-03-27 00:26:47.873429 | orchestrator | + '[' -n '' ']' 2025-03-27 00:26:47.873444 | orchestrator | + unset VIRTUAL_ENV 2025-03-27 00:26:47.873458 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-03-27 00:26:47.873473 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-03-27 00:26:47.873488 | orchestrator | + unset -f deactivate 2025-03-27 00:26:47.873504 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-03-27 00:26:47.873525 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-27 00:26:47.874448 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-27 00:26:47.874471 | orchestrator | + local max_attempts=60 2025-03-27 00:26:47.874485 | orchestrator | + local name=ceph-ansible 2025-03-27 00:26:47.874499 | orchestrator | + local attempt_num=1 2025-03-27 00:26:47.874562 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-27 00:26:47.909822 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-27 00:26:47.910913 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-27 00:26:47.910942 | orchestrator | + local max_attempts=60 2025-03-27 00:26:47.910957 | orchestrator | + local name=kolla-ansible 2025-03-27 00:26:47.910972 | orchestrator | + local attempt_num=1 2025-03-27 00:26:47.911002 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-27 00:26:47.947572 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-27 00:26:47.948823 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-03-27 00:26:47.948849 | orchestrator | + local max_attempts=60 2025-03-27 00:26:47.948864 | orchestrator | + local name=osism-ansible 2025-03-27 00:26:47.948879 | orchestrator | + local attempt_num=1 2025-03-27 00:26:47.948897 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-27 00:26:47.977362 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-27 00:26:48.784753 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-27 00:26:48.784830 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-27 00:26:48.784850 | orchestrator | ++ semver 8.1.0 9.0.0 2025-03-27 00:26:48.846643 | orchestrator | + [[ -1 -ge 0 ]] 2025-03-27 00:26:49.083019 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-03-27 00:26:49.083096 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-03-27 00:26:49.083123 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-03-27 00:26:49.092513 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092535 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092546 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-03-27 00:26:49.092576 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-03-27 00:26:49.092589 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092604 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092616 | orchestrator | manager-flower-1 
registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092628 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 50 seconds (healthy) 2025-03-27 00:26:49.092639 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092651 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-03-27 00:26:49.092662 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092674 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092709 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-03-27 00:26:49.092721 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092732 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092744 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092755 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-03-27 00:26:49.092771 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-03-27 00:26:49.239297 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-03-27 00:26:49.250183 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 9 minutes ago Up 8 minutes (healthy) 2025-03-27 00:26:49.250207 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 9 minutes ago Up 3 minutes (healthy) 2025-03-27 00:26:49.250219 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 9 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-03-27 00:26:49.250256 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 9 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-03-27 00:26:49.250272 | orchestrator | ++ semver 8.1.0 7.0.0 2025-03-27 00:26:49.308495 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-27 00:26:49.315430 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-03-27 00:26:49.315454 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-03-27 00:26:51.008292 | orchestrator | 2025-03-27 00:26:51 | INFO  | Task bf44c117-2e7b-40db-b81a-dcebd6ab0ac5 (resolvconf) was prepared for execution. 
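The wait_for_container_healthy calls traced just before the compose listings check a single container's health via docker inspect. Reconstructed from that trace, the helper looks roughly like this; the real function in the testbed scripts may differ in details such as the sleep interval and error handling:

wait_for_container_healthy() {
    # Poll one container until Docker reports it healthy, giving up after max_attempts tries.
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # interval is an assumption, not visible in the trace
    done
}

It is invoked exactly as in the trace, e.g. wait_for_container_healthy 60 ceph-ansible.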
2025-03-27 00:26:54.136677 | orchestrator | 2025-03-27 00:26:51 | INFO  | It takes a moment until task bf44c117-2e7b-40db-b81a-dcebd6ab0ac5 (resolvconf) has been started and output is visible here. 2025-03-27 00:26:54.136807 | orchestrator | 2025-03-27 00:26:54.138628 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-03-27 00:26:54.138662 | orchestrator | 2025-03-27 00:26:54.138843 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-27 00:26:54.139409 | orchestrator | Thursday 27 March 2025 00:26:54 +0000 (0:00:00.092) 0:00:00.092 ******** 2025-03-27 00:26:58.523697 | orchestrator | ok: [testbed-manager] 2025-03-27 00:26:58.524087 | orchestrator | 2025-03-27 00:26:58.524131 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-03-27 00:26:58.525012 | orchestrator | Thursday 27 March 2025 00:26:58 +0000 (0:00:04.390) 0:00:04.482 ******** 2025-03-27 00:26:58.600265 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:26:58.600539 | orchestrator | 2025-03-27 00:26:58.600652 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-03-27 00:26:58.600686 | orchestrator | Thursday 27 March 2025 00:26:58 +0000 (0:00:00.073) 0:00:04.556 ******** 2025-03-27 00:26:58.702552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-03-27 00:26:58.702940 | orchestrator | 2025-03-27 00:26:58.703410 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-03-27 00:26:58.704052 | orchestrator | Thursday 27 March 2025 00:26:58 +0000 (0:00:00.105) 0:00:04.661 ******** 2025-03-27 00:26:58.800523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-03-27 00:26:58.801509 | orchestrator | 2025-03-27 00:26:58.807374 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-03-27 00:27:00.027032 | orchestrator | Thursday 27 March 2025 00:26:58 +0000 (0:00:00.098) 0:00:04.759 ******** 2025-03-27 00:27:00.027151 | orchestrator | ok: [testbed-manager] 2025-03-27 00:27:00.028126 | orchestrator | 2025-03-27 00:27:00.028156 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-03-27 00:27:00.028702 | orchestrator | Thursday 27 March 2025 00:27:00 +0000 (0:00:01.224) 0:00:05.984 ******** 2025-03-27 00:27:00.087545 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:27:00.087895 | orchestrator | 2025-03-27 00:27:00.088343 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-03-27 00:27:00.088739 | orchestrator | Thursday 27 March 2025 00:27:00 +0000 (0:00:00.063) 0:00:06.048 ******** 2025-03-27 00:27:00.605203 | orchestrator | ok: [testbed-manager] 2025-03-27 00:27:00.606345 | orchestrator | 2025-03-27 00:27:00.606380 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-03-27 00:27:00.606926 | orchestrator | Thursday 27 March 2025 00:27:00 +0000 (0:00:00.515) 0:00:06.563 ******** 2025-03-27 00:27:00.683078 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:27:00.685007 | orchestrator | 2025-03-27 00:27:00.685036 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-03-27 00:27:00.685387 | orchestrator | Thursday 27 March 2025 00:27:00 +0000 (0:00:00.077) 0:00:06.641 ******** 2025-03-27 00:27:01.275915 | orchestrator | changed: [testbed-manager] 2025-03-27 00:27:02.553700 | orchestrator | 2025-03-27 00:27:02.553793 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-03-27 00:27:02.553811 | orchestrator | Thursday 27 March 2025 00:27:01 +0000 (0:00:00.594) 0:00:07.236 ******** 2025-03-27 00:27:02.553843 | orchestrator | changed: [testbed-manager] 2025-03-27 00:27:02.554376 | orchestrator | 2025-03-27 00:27:02.555636 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-03-27 00:27:02.556568 | orchestrator | Thursday 27 March 2025 00:27:02 +0000 (0:00:01.275) 0:00:08.511 ******** 2025-03-27 00:27:03.600577 | orchestrator | ok: [testbed-manager] 2025-03-27 00:27:03.601080 | orchestrator | 2025-03-27 00:27:03.601809 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-03-27 00:27:03.602701 | orchestrator | Thursday 27 March 2025 00:27:03 +0000 (0:00:01.046) 0:00:09.557 ******** 2025-03-27 00:27:03.688886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-03-27 00:27:03.689325 | orchestrator | 2025-03-27 00:27:03.690157 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-03-27 00:27:03.691011 | orchestrator | Thursday 27 March 2025 00:27:03 +0000 (0:00:00.089) 0:00:09.647 ******** 2025-03-27 00:27:04.920019 | orchestrator | changed: [testbed-manager] 2025-03-27 00:27:04.920510 | orchestrator | 2025-03-27 00:27:04.920840 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:27:04.920873 | orchestrator | 2025-03-27 00:27:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:27:04.921323 | orchestrator | 2025-03-27 00:27:04 | INFO  | Please wait and do not abort execution. 
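The resolvconf role above switched testbed-manager over to systemd-resolved and relinked /etc/resolv.conf to the stub resolver. A short sketch of commands that could confirm the result by hand, assuming shell access to the manager:

readlink -f /etc/resolv.conf            # expected: /run/systemd/resolve/stub-resolv.conf
systemctl is-active systemd-resolved    # expected: active
resolvectl status | head -n 20          # shows the name servers configured by the role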
2025-03-27 00:27:04.921592 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:27:04.922850 | orchestrator | 2025-03-27 00:27:04.923035 | orchestrator | Thursday 27 March 2025 00:27:04 +0000 (0:00:01.231) 0:00:10.879 ******** 2025-03-27 00:27:04.923459 | orchestrator | =============================================================================== 2025-03-27 00:27:04.924050 | orchestrator | Gathering Facts --------------------------------------------------------- 4.39s 2025-03-27 00:27:04.924739 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.28s 2025-03-27 00:27:04.925142 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.23s 2025-03-27 00:27:04.925439 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.22s 2025-03-27 00:27:04.925962 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.05s 2025-03-27 00:27:04.926748 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.59s 2025-03-27 00:27:04.927368 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.52s 2025-03-27 00:27:04.927527 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.11s 2025-03-27 00:27:04.928349 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.10s 2025-03-27 00:27:04.928757 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-03-27 00:27:04.929315 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-03-27 00:27:04.929864 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-03-27 00:27:04.930326 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-03-27 00:27:05.405316 | orchestrator | + osism apply sshconfig 2025-03-27 00:27:06.889795 | orchestrator | 2025-03-27 00:27:06 | INFO  | Task ecbcfabc-837a-468d-b764-9afe91bfc8a9 (sshconfig) was prepared for execution. 2025-03-27 00:27:10.435501 | orchestrator | 2025-03-27 00:27:06 | INFO  | It takes a moment until task ecbcfabc-837a-468d-b764-9afe91bfc8a9 (sshconfig) has been started and output is visible here. 
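Judging by the task names, the sshconfig role that starts next writes one fragment per testbed host into ~/.ssh/config.d and assembles them into ~/.ssh/config for the operator user. A sketch of how the assembled result could be inspected afterwards; the paths are assumed from the task names, not shown verbatim in the log.

ls ~/.ssh/config.d/                                                 # one fragment per testbed host
head -n 20 ~/.ssh/config                                            # the assembled configuration
ssh -G testbed-node-0 | grep -Ei '^(hostname|user|identityfile)'    # effective values for one host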
2025-03-27 00:27:10.435643 | orchestrator | 2025-03-27 00:27:10.437968 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-03-27 00:27:10.441286 | orchestrator | 2025-03-27 00:27:10.441323 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-03-27 00:27:10.442477 | orchestrator | Thursday 27 March 2025 00:27:10 +0000 (0:00:00.132) 0:00:00.132 ******** 2025-03-27 00:27:11.037457 | orchestrator | ok: [testbed-manager] 2025-03-27 00:27:11.037926 | orchestrator | 2025-03-27 00:27:11.038710 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-03-27 00:27:11.038979 | orchestrator | Thursday 27 March 2025 00:27:11 +0000 (0:00:00.604) 0:00:00.737 ******** 2025-03-27 00:27:11.571960 | orchestrator | changed: [testbed-manager] 2025-03-27 00:27:11.572699 | orchestrator | 2025-03-27 00:27:11.573512 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-03-27 00:27:11.574186 | orchestrator | Thursday 27 March 2025 00:27:11 +0000 (0:00:00.531) 0:00:01.268 ******** 2025-03-27 00:27:17.730927 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-03-27 00:27:17.731146 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-03-27 00:27:17.731173 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-03-27 00:27:17.731195 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-03-27 00:27:17.732294 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-03-27 00:27:17.732955 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-03-27 00:27:17.734266 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-03-27 00:27:17.736431 | orchestrator | 2025-03-27 00:27:17.737271 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-03-27 00:27:17.738152 | orchestrator | Thursday 27 March 2025 00:27:17 +0000 (0:00:06.158) 0:00:07.426 ******** 2025-03-27 00:27:17.803201 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:27:17.804019 | orchestrator | 2025-03-27 00:27:17.805337 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-03-27 00:27:17.807842 | orchestrator | Thursday 27 March 2025 00:27:17 +0000 (0:00:00.076) 0:00:07.503 ******** 2025-03-27 00:27:18.416929 | orchestrator | changed: [testbed-manager] 2025-03-27 00:27:18.418304 | orchestrator | 2025-03-27 00:27:18.419385 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:27:18.421121 | orchestrator | 2025-03-27 00:27:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:27:18.421151 | orchestrator | 2025-03-27 00:27:18 | INFO  | Please wait and do not abort execution. 
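Directly after the recap below, the job runs osism apply known-hosts, which ssh-keyscans every testbed host and writes the collected keys both for the hostname and for the ansible_host address. A stand-alone sketch of the same idea; the host list and target file are assumptions.

#!/usr/bin/env bash
# Sketch: collect host keys for the testbed hosts into a known_hosts file.
set -euo pipefail

known_hosts=~/.ssh/known_hosts
hosts=(testbed-manager testbed-node-{0..5})

for host in "${hosts[@]}"; do
    # ssh-keyscan prints plain (unhashed) entries, matching what the role writes.
    ssh-keyscan -t rsa,ecdsa,ed25519 "$host" >> "$known_hosts" 2>/dev/null
done
chmod 0644 "$known_hosts"   # the role ends with a comparable "Set file permissions" step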
2025-03-27 00:27:18.421172 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:27:18.421916 | orchestrator | 2025-03-27 00:27:18.422778 | orchestrator | Thursday 27 March 2025 00:27:18 +0000 (0:00:00.613) 0:00:08.116 ******** 2025-03-27 00:27:18.423631 | orchestrator | =============================================================================== 2025-03-27 00:27:18.424689 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.16s 2025-03-27 00:27:18.425500 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2025-03-27 00:27:18.426297 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s 2025-03-27 00:27:18.426765 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s 2025-03-27 00:27:18.427460 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-03-27 00:27:18.892031 | orchestrator | + osism apply known-hosts 2025-03-27 00:27:20.383487 | orchestrator | 2025-03-27 00:27:20 | INFO  | Task 25419032-d85f-48f6-9319-aa6036e24eb1 (known-hosts) was prepared for execution. 2025-03-27 00:27:23.545880 | orchestrator | 2025-03-27 00:27:20 | INFO  | It takes a moment until task 25419032-d85f-48f6-9319-aa6036e24eb1 (known-hosts) has been started and output is visible here. 2025-03-27 00:27:23.546071 | orchestrator | 2025-03-27 00:27:23.547702 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-03-27 00:27:23.547748 | orchestrator | 2025-03-27 00:27:23.549012 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-03-27 00:27:23.549315 | orchestrator | Thursday 27 March 2025 00:27:23 +0000 (0:00:00.114) 0:00:00.114 ******** 2025-03-27 00:27:29.822514 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-03-27 00:27:29.823930 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-03-27 00:27:29.823979 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-03-27 00:27:29.825078 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-03-27 00:27:29.828759 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-03-27 00:27:29.830549 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-03-27 00:27:29.834976 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-03-27 00:27:29.835014 | orchestrator | 2025-03-27 00:27:29.835768 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-03-27 00:27:29.836399 | orchestrator | Thursday 27 March 2025 00:27:29 +0000 (0:00:06.280) 0:00:06.395 ******** 2025-03-27 00:27:30.012596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-03-27 00:27:30.012746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-03-27 00:27:30.013532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-03-27 
00:27:30.014264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-03-27 00:27:30.015592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-03-27 00:27:30.016055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-03-27 00:27:30.016087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-03-27 00:27:30.016577 | orchestrator | 2025-03-27 00:27:30.017060 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:30.017362 | orchestrator | Thursday 27 March 2025 00:27:30 +0000 (0:00:00.191) 0:00:06.586 ******** 2025-03-27 00:27:31.334392 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp53pBNmcJB6ywvNXeQe+nHCnwk61eJdeTiNqx4ZzifMdeSYPYtUAGBe+eygtjis/u7Bv+huX+k3B35Gx/ghTEVKiWR4V3dXQJOA89QjlaW0kZXVsGTMjiWkPdr4kZIXaAr4819heFxLic6IgHtT2Yhk53miy2R+uX+PKHG551NsiCYbcekPpObjWGaq5M5ymMrNc6mdv5lDwaIvui7pMR2gZPIGQ42cZVT8dXcjky6xBGzXrRKHkUYTugPAHlNRBaxPrIDILcjqsWbfpn+KYXye8Ap/eS0S6cS3xbx6cYwJHDG/+TvXL8QanYKQEixUvriNH31Vuvsb7koCUyqvuK9+WPtbFZf0mZPePDgxPMaf2uSltOG9ATbrgiRyLNs/RtzjW8tb+BTNfKPrswBjmlez56FVv05z06g3KUXaKuEJG5A8ATVnIKuNQE4RdcT+/LapPjlnrkwh4QswfCbiVXyJdsWkLPNy7/a43+n15QhbIAR2zuUz5gp8oo0BzCpCc=) 2025-03-27 00:27:31.335908 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKGKRel1jZHqnOr0EH7mRe3xqFOsI6sog6De9gfxoGd78Raukj00Ev47E0kuNAeU85xIDysN6ZkIY+jquoG4RsE=) 2025-03-27 00:27:31.335952 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDPJ+X9Au7oIW24KO0oeHFskhXq6ufIr1hNMnMfmqyi7) 2025-03-27 00:27:31.336710 | orchestrator | 2025-03-27 00:27:31.337741 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:31.338872 | orchestrator | Thursday 27 March 2025 00:27:31 +0000 (0:00:01.318) 0:00:07.905 ******** 2025-03-27 00:27:32.486779 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjtn6PtXbcTjHEGR0vzxmydGCswzfgi9bRS6OWZqGFqs4EsoG+m+5sC6mIOBt76oLQ4WZeHt5Zl2CfroPvm4yfGOO0gV6/pj503JYcDQ7AzT4+Y04Modsfq9sHofZ6BRlYolbiknvTdnDwjUg4jh3ahhRk148pItSGBW2fXqLSpjO3or80ApS/v5f1j9QVgJtjBrP1gpp8axZSj2iww0CJFlAtZ2fH3wQdirCH+WDPzuhQV+cDi7D5rziLGnpLH9t3Slq1LEj+vN2dOmolD4rY6ATYHGiSC8KgXu04xqSlQKyuCaZWWRPgiG/M6aXM9Wc0w0H1CZc8dsuuIvDaBCQd/zLEMVOoMAFgroavOOUof4dOhXTK3Q8crJ8qXYXvI2r8ltxOLF49Eie0+nUyzVYpiVQz2W2lVAFiXPEKhK953HRp/yXI6rDW89FvGO1xMWGG1/5n5qrB+xUdvhwbrasaP0kwMjVxIylP1HZZXD35a3PewtHJUTKdjFN+FCsy9P0=) 2025-03-27 00:27:32.487702 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJNyAeF8ynsCrDP0B30CUsAlWaPjrkUMeTIm1GfXap096Gi6fiCDOyuNldDP30DdYFJX6yUyV6NmMEekegDcv6k=) 2025-03-27 00:27:32.489967 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEJjkqQFT4TIGrhe0k0MvRkzSxjdCqKwpH9IiUwakJ2X) 2025-03-27 00:27:32.490492 | orchestrator | 2025-03-27 00:27:32.490862 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:32.491181 | orchestrator | Thursday 27 March 2025 00:27:32 +0000 (0:00:01.151) 0:00:09.056 ******** 2025-03-27 00:27:33.569777 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDiXoIBAk+GSlCcm0yywNkWvQtPPLvJsoLiBj5eKebmffE/RfQAas6n+dPc6hjLQkwfMLRRL+4VHkAk9nLpzHCn0XnD7W7gdvjkWhKFnDo8sJ8+10l9Ko1+pi/zfqdb/w2BWmnc35Y0E8suoRUDJdX3IMM+ev8IiWKncKjtWrH57ax8onUFSOrsl+U/60DKS7FL6mndUN8mHq7Eu9oe8OuAMQQBF8j9ngwrxdOGb/nAqXgF4rjk+8WSsFRQo8l+e1vGo3cfJI17/S8nPRiVykbA3mwmoJkKRTTR8wSVZSrdbievX6TDTM4tZ7/peUfqZ2+cRU5pEReaFhswWqIh/MObDyR8RwQ+9OPVWAyDpFAZEzTlYWisTShiqIpl0o1rK8ot2JhQuBg6PPSDLVSSj2rEYZnvM1U4YkSBFwxDxMBxHQ3a6GaPEXiIawT67NM2umssz3C45fhVl0BGDrFdp32PaNsWL7XTgyBOblrnNiI2RSRe2nrhtSWcb+TZRaGACx0=) 2025-03-27 00:27:33.571484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF78v8od4gcKkDd2LCQM8FQuksZ3wp9W7WUAQz55pTN6jOqU1fF+vUM+H5PLXVrluc3r9ejaJLIdvd2V3CvOqq0=) 2025-03-27 00:27:33.572008 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwCw3wm1qVM0IypzfAbMyd9I1Mw3LewwMkKnzHxoXn5) 2025-03-27 00:27:33.573551 | orchestrator | 2025-03-27 00:27:33.574107 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:33.575009 | orchestrator | Thursday 27 March 2025 00:27:33 +0000 (0:00:01.085) 0:00:10.142 ******** 2025-03-27 00:27:34.718171 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCc5RHLrgO3MmTrLMGQ4P9rUDE6mIhi1CuO/bSImY8JBo2MyF3h05o+5Tbr5PCESVLysXaKkpsrW4Yn8xFb5t0F8D2Te7iCjtE8iLp8yZn7mluBLjQIO1e4XmaatU7ddGmmfS1051D+pW0orT6JZUnCoyA3x/xFf1WzfdXFDjY6GfKOhikZiMgtR1dqd497e9A2qDyvZhQtB1zGyZcSTFcRt42FToN9HV5Sz1VM++7O+Nfu+9CJI2MPlknl3PgwpT7n+uqc0HFBnIOHAKIuZ3LiMnioZMjFTuPFB6qHIqU0mvR2xYrgjZatPzuOGGAlrE3/9VLkAyai3aEq8KETIEXcO5r++L/fmoQuBNyByP5+9JmXYaP5P3Q39RiUbEGBMwaPYt3KGNSGJT3o/Kk7dQqlL6beRtliZ7tKvMbMGuRIvMs6lgGMzt/UysavI28T/kv5StQfmHAzbLpq0QHWA+xtKhtA13MhXVxHJ3r97R0q3kzGQ7XDWISu3Qvnd6UBbFM=) 2025-03-27 00:27:34.718956 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEP43l76pZifGhorQFvuUkbdk6D3FIlLp5Gwwz4lXdge9H6YDK5CHuvXYwQwFpJmlGOyToIt+VGWx3cAUoodiRA=) 2025-03-27 00:27:34.719773 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINzEcZfOMRhrf947lffpWbgJUj57fUFfnROCuQZBMXLu) 2025-03-27 00:27:34.720482 | orchestrator | 2025-03-27 00:27:34.721299 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:34.721861 | orchestrator | Thursday 27 March 2025 00:27:34 +0000 (0:00:01.147) 0:00:11.289 ******** 2025-03-27 00:27:35.880305 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCdRlE8AS2WZZPP5sk0XNPXQB/ovFpYa5qlCUejObtt/zYoEBqIus9YpNbhibDBD/1esaH4kiYpktXIhTi+P0bInfZc0/tPYCkoFCcBfkaU3jE1zor/bVYTOJLNXVJdOVSEFukgWi3GesRAhBEVS52SZoHeGYnmcblkVnOfzrqpP39ATgzNPjN3uktIbQe96gcI4yOPENWeVZRugG9yAkyBj+zkc550fPD7hISfcAXh8LotdCq71yjP9+n4RT6Ex72SAQ9OTzzN8dzxQc78xMWXa/ypJKQpPJBWwC/0ezg8LdY9H4aaQvdhu1R68Txxfr88UExeLG2FrflZypdTYNu3eJ8ad6V8q3OOn5mFpFW0hf7FTwgsx/g1bZyM2QUnBGvCKpeg4lX+b3kcl5aIhtkKgI1TY0hhHjoe5Uwr5/o5OHJoX41az0LTMXzhbAU1/XlbOSlFKiwJZFZvP+d1RKMl298wUtfuvb6j70Id6SVPC++iqOrk4IW7j3XGlaw+4/8=) 2025-03-27 00:27:35.881101 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAle3ZUNVwq0AdyeXDbh04TFxeMMqdZd9/y4SotjWk5jWh47Gb4ApEO7sNnVB7smB2gWDXAjwLiwQU79bhNnx9U=) 2025-03-27 00:27:35.881156 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHoUFIVngMGLshLO6jFR+5IABv56Log5yZN+CnuqvuSg) 2025-03-27 00:27:35.881635 | orchestrator | 2025-03-27 00:27:35.881672 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:35.882161 | orchestrator | Thursday 27 March 2025 00:27:35 +0000 (0:00:01.163) 0:00:12.452 ******** 2025-03-27 00:27:37.047056 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXqc/01lv1bG4Mu9/JWIW11Pi5k76G0HqfUqoui31JSjpj+02oOpBDWXwFUfGVa5EWKohtfHUk7zGhpZx4E8ByZGO0CJIMiWl3rZDbDI89Vi/r3v6pZ2P1nI19uFCFtXVw2LfNh1hwvBJqspjQSDDdgrT/2Z+kBOiu8lttBZmmgMxRQLIgbJ9QQrquCzTgT+smgh+t6uiu3rMNaLVSGXRNdI1KgskNUd+eDc5985pt/ofP4ipoREmAUHHxsUD2NI68/csbsHxYmbtLciAShShUSpyCZpJoa6OekZawffwyZ5dBbGqLOpUQMHKV2BjvcMQqiu0n03lMG/iJPMFxTfoAnEYkt09xXvnGjw+jzSprMSpF2nDxw7Miw3X2r40rpT1E0mG4RhhQecSTLdUzUoCOfJCdCdmgm1WKcmM41BDZywBAogIG5bh7AcSGPvyPae2+3A/bylG6udFN417Q4WbZeJlQCGsfJEKQwVxuMOhm5UtkPj4h9yPbe+Cp9VGtqTU=) 2025-03-27 00:27:37.048513 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE8rrymHzOgRGSTyUvEYyqInoQvWivPUwUIdJzorjSb9IRKi95RMqseN9BsalTtG9db/xp1ZzKPlgt8nS8y2d6E=) 2025-03-27 00:27:37.049180 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBO8cgg/YluVAFbQpGv0b+V1YnrgeIqFqOv2LbWcnx7M) 2025-03-27 00:27:37.049259 | orchestrator | 2025-03-27 00:27:37.049280 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:37.049835 | orchestrator | Thursday 27 March 2025 00:27:37 +0000 (0:00:01.164) 0:00:13.617 ******** 2025-03-27 00:27:38.216009 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYHZRvVuXGoUD28ItYWuv/W9SB+Ez2frfAc47b0mOT1bTQMJfy1YmOPaVJsLHmnabR/b9Ik1xynoDjeh5RH0lhdntittJkxlZTSsotq4CQeHmjago0Ke5xwu8FpLxlK8+KM0ZnewgjlQTzVf0vqF6V1wah5PzHy+0eyDeCJqfdYsX+hBHqlfT223c0e66uJmKeFBa88nVtBa5KF9mtNqh9SMRK42anWPrq80mPt6PuuvIKMRIBYYo6s5NIexnNzMabLDi7+th5CGzbK/bGn8MPJRHWQvy2dUWCRFFkJPxaLqA0nCCk7b08hmDWdvt0yrCzeW5DbPEnhg2uEEmvDQKRNpHBGXoq02FNmCYW/6GM5rIM50JpzdrpRDkZ3smsaWY1eSrfL4OyzicNEVpdovte6NkZ0o5VcWrBke71RcL/0H8e4vxnuHle/Wl2G+KdiEvW2XqWl38TyrohPYy+am8IaumdH/3OaMWXdlP4f6kXdJT/dnAfHQ4ri3SSDR9Why8=) 2025-03-27 00:27:38.216659 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEvIk31CvukGLHXaezHwYaD+C5dx2BO1FkZnHqPGi/IOMXYZgYkZadhUj0y0g+CUqgqIIv8NufPTV0+gY5eB8uI=) 2025-03-27 
00:27:38.216694 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOCL+/BVOXTDA4CgL0PC1d9di+Oxha1bZeQ4KlMgw+dZ) 2025-03-27 00:27:38.217858 | orchestrator | 2025-03-27 00:27:38.219094 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-03-27 00:27:38.220868 | orchestrator | Thursday 27 March 2025 00:27:38 +0000 (0:00:01.171) 0:00:14.789 ******** 2025-03-27 00:27:43.801120 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-03-27 00:27:43.801720 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-03-27 00:27:43.803410 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-03-27 00:27:43.805763 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-03-27 00:27:43.806476 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-03-27 00:27:43.807711 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-03-27 00:27:43.808470 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-03-27 00:27:43.809115 | orchestrator | 2025-03-27 00:27:43.809862 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-03-27 00:27:43.810482 | orchestrator | Thursday 27 March 2025 00:27:43 +0000 (0:00:05.584) 0:00:20.374 ******** 2025-03-27 00:27:43.972647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-03-27 00:27:43.974270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-03-27 00:27:43.974306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-03-27 00:27:43.975649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-03-27 00:27:43.977474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-03-27 00:27:43.978396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-03-27 00:27:43.978449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-03-27 00:27:43.979063 | orchestrator | 2025-03-27 00:27:43.979895 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:43.980543 | orchestrator | Thursday 27 March 2025 00:27:43 +0000 (0:00:00.172) 0:00:20.547 ******** 2025-03-27 00:27:45.120119 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCp53pBNmcJB6ywvNXeQe+nHCnwk61eJdeTiNqx4ZzifMdeSYPYtUAGBe+eygtjis/u7Bv+huX+k3B35Gx/ghTEVKiWR4V3dXQJOA89QjlaW0kZXVsGTMjiWkPdr4kZIXaAr4819heFxLic6IgHtT2Yhk53miy2R+uX+PKHG551NsiCYbcekPpObjWGaq5M5ymMrNc6mdv5lDwaIvui7pMR2gZPIGQ42cZVT8dXcjky6xBGzXrRKHkUYTugPAHlNRBaxPrIDILcjqsWbfpn+KYXye8Ap/eS0S6cS3xbx6cYwJHDG/+TvXL8QanYKQEixUvriNH31Vuvsb7koCUyqvuK9+WPtbFZf0mZPePDgxPMaf2uSltOG9ATbrgiRyLNs/RtzjW8tb+BTNfKPrswBjmlez56FVv05z06g3KUXaKuEJG5A8ATVnIKuNQE4RdcT+/LapPjlnrkwh4QswfCbiVXyJdsWkLPNy7/a43+n15QhbIAR2zuUz5gp8oo0BzCpCc=) 2025-03-27 00:27:45.121486 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKGKRel1jZHqnOr0EH7mRe3xqFOsI6sog6De9gfxoGd78Raukj00Ev47E0kuNAeU85xIDysN6ZkIY+jquoG4RsE=) 2025-03-27 00:27:45.122566 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDPJ+X9Au7oIW24KO0oeHFskhXq6ufIr1hNMnMfmqyi7) 2025-03-27 00:27:45.123161 | orchestrator | 2025-03-27 00:27:45.123845 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:45.124424 | orchestrator | Thursday 27 March 2025 00:27:45 +0000 (0:00:01.145) 0:00:21.692 ******** 2025-03-27 00:27:46.260443 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjtn6PtXbcTjHEGR0vzxmydGCswzfgi9bRS6OWZqGFqs4EsoG+m+5sC6mIOBt76oLQ4WZeHt5Zl2CfroPvm4yfGOO0gV6/pj503JYcDQ7AzT4+Y04Modsfq9sHofZ6BRlYolbiknvTdnDwjUg4jh3ahhRk148pItSGBW2fXqLSpjO3or80ApS/v5f1j9QVgJtjBrP1gpp8axZSj2iww0CJFlAtZ2fH3wQdirCH+WDPzuhQV+cDi7D5rziLGnpLH9t3Slq1LEj+vN2dOmolD4rY6ATYHGiSC8KgXu04xqSlQKyuCaZWWRPgiG/M6aXM9Wc0w0H1CZc8dsuuIvDaBCQd/zLEMVOoMAFgroavOOUof4dOhXTK3Q8crJ8qXYXvI2r8ltxOLF49Eie0+nUyzVYpiVQz2W2lVAFiXPEKhK953HRp/yXI6rDW89FvGO1xMWGG1/5n5qrB+xUdvhwbrasaP0kwMjVxIylP1HZZXD35a3PewtHJUTKdjFN+FCsy9P0=) 2025-03-27 00:27:46.261542 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJNyAeF8ynsCrDP0B30CUsAlWaPjrkUMeTIm1GfXap096Gi6fiCDOyuNldDP30DdYFJX6yUyV6NmMEekegDcv6k=) 2025-03-27 00:27:46.261580 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEJjkqQFT4TIGrhe0k0MvRkzSxjdCqKwpH9IiUwakJ2X) 2025-03-27 00:27:46.261604 | orchestrator | 2025-03-27 00:27:46.261883 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:46.263514 | orchestrator | Thursday 27 March 2025 00:27:46 +0000 (0:00:01.139) 0:00:22.832 ******** 2025-03-27 00:27:47.396179 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF78v8od4gcKkDd2LCQM8FQuksZ3wp9W7WUAQz55pTN6jOqU1fF+vUM+H5PLXVrluc3r9ejaJLIdvd2V3CvOqq0=) 2025-03-27 00:27:47.397430 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDiXoIBAk+GSlCcm0yywNkWvQtPPLvJsoLiBj5eKebmffE/RfQAas6n+dPc6hjLQkwfMLRRL+4VHkAk9nLpzHCn0XnD7W7gdvjkWhKFnDo8sJ8+10l9Ko1+pi/zfqdb/w2BWmnc35Y0E8suoRUDJdX3IMM+ev8IiWKncKjtWrH57ax8onUFSOrsl+U/60DKS7FL6mndUN8mHq7Eu9oe8OuAMQQBF8j9ngwrxdOGb/nAqXgF4rjk+8WSsFRQo8l+e1vGo3cfJI17/S8nPRiVykbA3mwmoJkKRTTR8wSVZSrdbievX6TDTM4tZ7/peUfqZ2+cRU5pEReaFhswWqIh/MObDyR8RwQ+9OPVWAyDpFAZEzTlYWisTShiqIpl0o1rK8ot2JhQuBg6PPSDLVSSj2rEYZnvM1U4YkSBFwxDxMBxHQ3a6GaPEXiIawT67NM2umssz3C45fhVl0BGDrFdp32PaNsWL7XTgyBOblrnNiI2RSRe2nrhtSWcb+TZRaGACx0=) 2025-03-27 
00:27:47.397708 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwCw3wm1qVM0IypzfAbMyd9I1Mw3LewwMkKnzHxoXn5) 2025-03-27 00:27:47.398784 | orchestrator | 2025-03-27 00:27:47.399596 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:47.400100 | orchestrator | Thursday 27 March 2025 00:27:47 +0000 (0:00:01.136) 0:00:23.968 ******** 2025-03-27 00:27:48.538763 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEP43l76pZifGhorQFvuUkbdk6D3FIlLp5Gwwz4lXdge9H6YDK5CHuvXYwQwFpJmlGOyToIt+VGWx3cAUoodiRA=) 2025-03-27 00:27:48.540535 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCc5RHLrgO3MmTrLMGQ4P9rUDE6mIhi1CuO/bSImY8JBo2MyF3h05o+5Tbr5PCESVLysXaKkpsrW4Yn8xFb5t0F8D2Te7iCjtE8iLp8yZn7mluBLjQIO1e4XmaatU7ddGmmfS1051D+pW0orT6JZUnCoyA3x/xFf1WzfdXFDjY6GfKOhikZiMgtR1dqd497e9A2qDyvZhQtB1zGyZcSTFcRt42FToN9HV5Sz1VM++7O+Nfu+9CJI2MPlknl3PgwpT7n+uqc0HFBnIOHAKIuZ3LiMnioZMjFTuPFB6qHIqU0mvR2xYrgjZatPzuOGGAlrE3/9VLkAyai3aEq8KETIEXcO5r++L/fmoQuBNyByP5+9JmXYaP5P3Q39RiUbEGBMwaPYt3KGNSGJT3o/Kk7dQqlL6beRtliZ7tKvMbMGuRIvMs6lgGMzt/UysavI28T/kv5StQfmHAzbLpq0QHWA+xtKhtA13MhXVxHJ3r97R0q3kzGQ7XDWISu3Qvnd6UBbFM=) 2025-03-27 00:27:48.540592 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINzEcZfOMRhrf947lffpWbgJUj57fUFfnROCuQZBMXLu) 2025-03-27 00:27:48.540617 | orchestrator | 2025-03-27 00:27:48.541139 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:48.541648 | orchestrator | Thursday 27 March 2025 00:27:48 +0000 (0:00:01.143) 0:00:25.112 ******** 2025-03-27 00:27:49.662424 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdRlE8AS2WZZPP5sk0XNPXQB/ovFpYa5qlCUejObtt/zYoEBqIus9YpNbhibDBD/1esaH4kiYpktXIhTi+P0bInfZc0/tPYCkoFCcBfkaU3jE1zor/bVYTOJLNXVJdOVSEFukgWi3GesRAhBEVS52SZoHeGYnmcblkVnOfzrqpP39ATgzNPjN3uktIbQe96gcI4yOPENWeVZRugG9yAkyBj+zkc550fPD7hISfcAXh8LotdCq71yjP9+n4RT6Ex72SAQ9OTzzN8dzxQc78xMWXa/ypJKQpPJBWwC/0ezg8LdY9H4aaQvdhu1R68Txxfr88UExeLG2FrflZypdTYNu3eJ8ad6V8q3OOn5mFpFW0hf7FTwgsx/g1bZyM2QUnBGvCKpeg4lX+b3kcl5aIhtkKgI1TY0hhHjoe5Uwr5/o5OHJoX41az0LTMXzhbAU1/XlbOSlFKiwJZFZvP+d1RKMl298wUtfuvb6j70Id6SVPC++iqOrk4IW7j3XGlaw+4/8=) 2025-03-27 00:27:49.662769 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAle3ZUNVwq0AdyeXDbh04TFxeMMqdZd9/y4SotjWk5jWh47Gb4ApEO7sNnVB7smB2gWDXAjwLiwQU79bhNnx9U=) 2025-03-27 00:27:49.663399 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHoUFIVngMGLshLO6jFR+5IABv56Log5yZN+CnuqvuSg) 2025-03-27 00:27:49.664331 | orchestrator | 2025-03-27 00:27:49.664604 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:49.665154 | orchestrator | Thursday 27 March 2025 00:27:49 +0000 (0:00:01.120) 0:00:26.232 ******** 2025-03-27 00:27:50.807297 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCXqc/01lv1bG4Mu9/JWIW11Pi5k76G0HqfUqoui31JSjpj+02oOpBDWXwFUfGVa5EWKohtfHUk7zGhpZx4E8ByZGO0CJIMiWl3rZDbDI89Vi/r3v6pZ2P1nI19uFCFtXVw2LfNh1hwvBJqspjQSDDdgrT/2Z+kBOiu8lttBZmmgMxRQLIgbJ9QQrquCzTgT+smgh+t6uiu3rMNaLVSGXRNdI1KgskNUd+eDc5985pt/ofP4ipoREmAUHHxsUD2NI68/csbsHxYmbtLciAShShUSpyCZpJoa6OekZawffwyZ5dBbGqLOpUQMHKV2BjvcMQqiu0n03lMG/iJPMFxTfoAnEYkt09xXvnGjw+jzSprMSpF2nDxw7Miw3X2r40rpT1E0mG4RhhQecSTLdUzUoCOfJCdCdmgm1WKcmM41BDZywBAogIG5bh7AcSGPvyPae2+3A/bylG6udFN417Q4WbZeJlQCGsfJEKQwVxuMOhm5UtkPj4h9yPbe+Cp9VGtqTU=) 2025-03-27 00:27:50.807857 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE8rrymHzOgRGSTyUvEYyqInoQvWivPUwUIdJzorjSb9IRKi95RMqseN9BsalTtG9db/xp1ZzKPlgt8nS8y2d6E=) 2025-03-27 00:27:50.809352 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBO8cgg/YluVAFbQpGv0b+V1YnrgeIqFqOv2LbWcnx7M) 2025-03-27 00:27:50.810704 | orchestrator | 2025-03-27 00:27:50.811464 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-27 00:27:50.812180 | orchestrator | Thursday 27 March 2025 00:27:50 +0000 (0:00:01.148) 0:00:27.380 ******** 2025-03-27 00:27:51.960669 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYHZRvVuXGoUD28ItYWuv/W9SB+Ez2frfAc47b0mOT1bTQMJfy1YmOPaVJsLHmnabR/b9Ik1xynoDjeh5RH0lhdntittJkxlZTSsotq4CQeHmjago0Ke5xwu8FpLxlK8+KM0ZnewgjlQTzVf0vqF6V1wah5PzHy+0eyDeCJqfdYsX+hBHqlfT223c0e66uJmKeFBa88nVtBa5KF9mtNqh9SMRK42anWPrq80mPt6PuuvIKMRIBYYo6s5NIexnNzMabLDi7+th5CGzbK/bGn8MPJRHWQvy2dUWCRFFkJPxaLqA0nCCk7b08hmDWdvt0yrCzeW5DbPEnhg2uEEmvDQKRNpHBGXoq02FNmCYW/6GM5rIM50JpzdrpRDkZ3smsaWY1eSrfL4OyzicNEVpdovte6NkZ0o5VcWrBke71RcL/0H8e4vxnuHle/Wl2G+KdiEvW2XqWl38TyrohPYy+am8IaumdH/3OaMWXdlP4f6kXdJT/dnAfHQ4ri3SSDR9Why8=) 2025-03-27 00:27:51.961092 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEvIk31CvukGLHXaezHwYaD+C5dx2BO1FkZnHqPGi/IOMXYZgYkZadhUj0y0g+CUqgqIIv8NufPTV0+gY5eB8uI=) 2025-03-27 00:27:51.961128 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOCL+/BVOXTDA4CgL0PC1d9di+Oxha1bZeQ4KlMgw+dZ) 2025-03-27 00:27:51.961145 | orchestrator | 2025-03-27 00:27:51.961160 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-03-27 00:27:51.961181 | orchestrator | Thursday 27 March 2025 00:27:51 +0000 (0:00:01.149) 0:00:28.530 ******** 2025-03-27 00:27:52.150092 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-03-27 00:27:52.150319 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-03-27 00:27:52.150800 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-03-27 00:27:52.151814 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-03-27 00:27:52.152272 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-03-27 00:27:52.153953 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-03-27 00:27:52.154603 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-03-27 00:27:52.154923 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:27:52.155626 | orchestrator | 2025-03-27 00:27:52.156148 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-03-27 00:27:52.156402 | orchestrator | Thursday 27 March 2025 00:27:52 +0000 (0:00:00.193) 0:00:28.723 ******** 2025-03-27 00:27:52.204989 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:27:52.205855 | orchestrator | 2025-03-27 00:27:52.205891 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-03-27 00:27:52.207457 | orchestrator | Thursday 27 March 2025 00:27:52 +0000 (0:00:00.055) 0:00:28.779 ******** 2025-03-27 00:27:52.279397 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:27:52.280361 | orchestrator | 2025-03-27 00:27:52.280393 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-03-27 00:27:52.280972 | orchestrator | Thursday 27 March 2025 00:27:52 +0000 (0:00:00.071) 0:00:28.851 ******** 2025-03-27 00:27:53.063606 | orchestrator | changed: [testbed-manager] 2025-03-27 00:27:53.064206 | orchestrator | 2025-03-27 00:27:53.064276 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:27:53.064802 | orchestrator | 2025-03-27 00:27:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:27:53.065396 | orchestrator | 2025-03-27 00:27:53 | INFO  | Please wait and do not abort execution. 2025-03-27 00:27:53.065427 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:27:53.066508 | orchestrator | 2025-03-27 00:27:53.067702 | orchestrator | Thursday 27 March 2025 00:27:53 +0000 (0:00:00.783) 0:00:29.634 ******** 2025-03-27 00:27:53.068116 | orchestrator | =============================================================================== 2025-03-27 00:27:53.069327 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.28s 2025-03-27 00:27:53.069756 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.58s 2025-03-27 00:27:53.070339 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.32s 2025-03-27 00:27:53.070948 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-03-27 00:27:53.071462 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-03-27 00:27:53.074219 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-03-27 00:27:53.074895 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-03-27 00:27:53.074924 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-03-27 00:27:53.075628 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-03-27 00:27:53.075657 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-03-27 00:27:53.075955 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-03-27 00:27:53.076544 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-03-27 00:27:53.077020 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-03-27 00:27:53.077248 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-03-27 00:27:53.077747 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-03-27 00:27:53.078120 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-03-27 00:27:53.078249 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.78s 2025-03-27 00:27:53.079437 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2025-03-27 00:27:53.082529 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-03-27 00:27:53.082577 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-03-27 00:27:53.504858 | orchestrator | + osism apply squid 2025-03-27 00:27:55.015686 | orchestrator | 2025-03-27 00:27:55 | INFO  | Task b34dee71-c978-4dbd-91bb-df6862884763 (squid) was prepared for execution. 2025-03-27 00:27:58.417978 | orchestrator | 2025-03-27 00:27:55 | INFO  | It takes a moment until task b34dee71-c978-4dbd-91bb-df6862884763 (squid) has been started and output is visible here. 2025-03-27 00:27:58.418301 | orchestrator | 2025-03-27 00:27:58.418435 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-03-27 00:27:58.418881 | orchestrator | 2025-03-27 00:27:58.419007 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-03-27 00:27:58.419443 | orchestrator | Thursday 27 March 2025 00:27:58 +0000 (0:00:00.161) 0:00:00.161 ******** 2025-03-27 00:27:58.518428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-03-27 00:27:58.519159 | orchestrator | 2025-03-27 00:27:58.519538 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-03-27 00:28:00.053169 | orchestrator | Thursday 27 March 2025 00:27:58 +0000 (0:00:00.101) 0:00:00.262 ******** 2025-03-27 00:28:00.053320 | orchestrator | ok: [testbed-manager] 2025-03-27 00:28:00.053398 | orchestrator | 2025-03-27 00:28:00.054079 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-03-27 00:28:00.054453 | orchestrator | Thursday 27 March 2025 00:28:00 +0000 (0:00:01.535) 0:00:01.797 ******** 2025-03-27 00:28:01.229063 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-03-27 00:28:01.229558 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-03-27 00:28:01.230292 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-03-27 00:28:01.230541 | orchestrator | 2025-03-27 00:28:01.231299 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-03-27 00:28:01.231329 | orchestrator | Thursday 27 March 2025 00:28:01 +0000 (0:00:01.174) 0:00:02.972 ******** 2025-03-27 00:28:02.319489 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-03-27 00:28:02.319930 | orchestrator | 2025-03-27 00:28:02.321634 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-03-27 00:28:02.322981 | orchestrator | Thursday 27 March 2025 00:28:02 +0000 (0:00:01.091) 0:00:04.063 ******** 2025-03-27 00:28:02.703470 | orchestrator | ok: [testbed-manager] 2025-03-27 00:28:02.704026 | orchestrator | 2025-03-27 00:28:02.704056 | 
orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-03-27 00:28:02.704077 | orchestrator | Thursday 27 March 2025 00:28:02 +0000 (0:00:00.383) 0:00:04.446 ******** 2025-03-27 00:28:03.719364 | orchestrator | changed: [testbed-manager] 2025-03-27 00:28:03.720270 | orchestrator | 2025-03-27 00:28:03.720771 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-03-27 00:28:03.720961 | orchestrator | Thursday 27 March 2025 00:28:03 +0000 (0:00:01.015) 0:00:05.462 ******** 2025-03-27 00:28:37.139465 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-03-27 00:28:49.927538 | orchestrator | ok: [testbed-manager] 2025-03-27 00:28:49.927671 | orchestrator | 2025-03-27 00:28:49.927692 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-03-27 00:28:49.927708 | orchestrator | Thursday 27 March 2025 00:28:37 +0000 (0:00:33.416) 0:00:38.878 ******** 2025-03-27 00:28:49.927740 | orchestrator | changed: [testbed-manager] 2025-03-27 00:28:49.928333 | orchestrator | 2025-03-27 00:28:49.928372 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-03-27 00:28:49.929306 | orchestrator | Thursday 27 March 2025 00:28:49 +0000 (0:00:12.789) 0:00:51.668 ******** 2025-03-27 00:29:50.029601 | orchestrator | Pausing for 60 seconds 2025-03-27 00:29:50.030327 | orchestrator | changed: [testbed-manager] 2025-03-27 00:29:50.030367 | orchestrator | 2025-03-27 00:29:50.030382 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-03-27 00:29:50.030403 | orchestrator | Thursday 27 March 2025 00:29:50 +0000 (0:01:00.101) 0:01:51.769 ******** 2025-03-27 00:29:50.121246 | orchestrator | ok: [testbed-manager] 2025-03-27 00:29:50.121634 | orchestrator | 2025-03-27 00:29:50.121664 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-03-27 00:29:50.121685 | orchestrator | Thursday 27 March 2025 00:29:50 +0000 (0:00:00.095) 0:01:51.864 ******** 2025-03-27 00:29:50.833681 | orchestrator | changed: [testbed-manager] 2025-03-27 00:29:50.833871 | orchestrator | 2025-03-27 00:29:50.833899 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:29:50.834890 | orchestrator | 2025-03-27 00:29:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:29:50.836073 | orchestrator | 2025-03-27 00:29:50 | INFO  | Please wait and do not abort execution. 
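The squid role brought the proxy up as a Compose service and only continues once the container reports healthy, including a fixed 60-second pause. Two commands that could be used to check the proxy by hand; the /opt/squid project directory and the default squid port 3128 are assumptions based on squid conventions, not values shown in this log.

docker compose --project-directory /opt/squid ps
curl -x http://testbed-manager:3128 -s -o /dev/null -w '%{http_code}\n' https://github.com   # a non-000 code means the proxy relayed the request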
2025-03-27 00:29:50.836104 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-27 00:29:50.837380 | orchestrator |
2025-03-27 00:29:50.837590 | orchestrator | Thursday 27 March 2025 00:29:50 +0000 (0:00:00.712) 0:01:52.577 ********
2025-03-27 00:29:50.838697 | orchestrator | ===============================================================================
2025-03-27 00:29:50.839130 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s
2025-03-27 00:29:50.840139 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.42s
2025-03-27 00:29:50.841512 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.79s
2025-03-27 00:29:50.842649 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.54s
2025-03-27 00:29:50.843476 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s
2025-03-27 00:29:50.843583 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s
2025-03-27 00:29:50.844603 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.02s
2025-03-27 00:29:50.845185 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.71s
2025-03-27 00:29:50.845896 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s
2025-03-27 00:29:50.846340 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2025-03-27 00:29:50.846988 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.10s
2025-03-27 00:29:51.294950 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-03-27 00:29:51.301310 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-03-27 00:29:51.301375 | orchestrator | ++ semver 8.1.0 9.0.0
2025-03-27 00:29:51.358182 | orchestrator | + [[ -1 -lt 0 ]]
2025-03-27 00:29:51.365118 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-03-27 00:29:51.365175 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml
2025-03-27 00:29:51.365202 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-03-27 00:29:51.370384 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-03-27 00:29:51.376093 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-03-27 00:29:52.893838 | orchestrator | 2025-03-27 00:29:52 | INFO  | Task d53a98f5-a500-4a5a-9de0-717e31887cb2 (operator) was prepared for execution.
2025-03-27 00:29:56.151289 | orchestrator | 2025-03-27 00:29:52 | INFO  | It takes a moment until task d53a98f5-a500-4a5a-9de0-717e31887cb2 (operator) has been started and output is visible here.
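Just before the operator apply above, the job rewrote parts of the inventory in place: the kolla docker_namespace was switched to kolla/release, and the commented network_dispatcher_scripts block for vxlan.sh was re-enabled in the testbed-nodes and testbed-managers group_vars. A sketch of how the result of those sed edits could be verified, together with the same apply command; the expected YAML in the comments is inferred from the sed patterns, not copied from the files.

# Confirm the in-place edits before the operator play runs.
grep 'docker_namespace:' /opt/configuration/inventory/group_vars/all/kolla.yml
#   docker_namespace: kolla/release

grep -A 2 '^network_dispatcher_scripts:' /opt/configuration/inventory/group_vars/testbed-nodes.yml
#   network_dispatcher_scripts:
#    - src: /opt/configuration/network/vxlan.sh
#      dest: routable.d/vxlan.sh

# Same command as in the log: apply the operator role to all nodes via the stock ubuntu account.
osism apply operator -u ubuntu -l testbed-nodes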
2025-03-27 00:29:56.151447 | orchestrator | 2025-03-27 00:29:56.154270 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-03-27 00:29:56.154311 | orchestrator | 2025-03-27 00:29:56.159719 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-27 00:29:56.164896 | orchestrator | Thursday 27 March 2025 00:29:56 +0000 (0:00:00.103) 0:00:00.103 ******** 2025-03-27 00:29:59.979059 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:29:59.979504 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:29:59.980262 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:29:59.984422 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:29:59.985095 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:29:59.985522 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:29:59.986069 | orchestrator | 2025-03-27 00:29:59.988118 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-03-27 00:30:00.873343 | orchestrator | Thursday 27 March 2025 00:29:59 +0000 (0:00:03.828) 0:00:03.932 ******** 2025-03-27 00:30:00.873511 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:30:00.873585 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:30:00.875095 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:30:00.875418 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:30:00.877289 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:30:00.877565 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:30:00.882206 | orchestrator | 2025-03-27 00:30:00.884299 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-03-27 00:30:00.884970 | orchestrator | 2025-03-27 00:30:00.886745 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-03-27 00:30:00.967564 | orchestrator | Thursday 27 March 2025 00:30:00 +0000 (0:00:00.895) 0:00:04.827 ******** 2025-03-27 00:30:00.967673 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:30:00.995764 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:30:01.039044 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:30:01.108707 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:30:01.111298 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:30:01.112332 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:30:01.115836 | orchestrator | 2025-03-27 00:30:01.189499 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-03-27 00:30:01.189592 | orchestrator | Thursday 27 March 2025 00:30:01 +0000 (0:00:00.236) 0:00:05.064 ******** 2025-03-27 00:30:01.189626 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:30:01.212394 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:30:01.236092 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:30:01.286566 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:30:01.286705 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:30:01.287290 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:30:01.288138 | orchestrator | 2025-03-27 00:30:01.293012 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-03-27 00:30:01.294279 | orchestrator | Thursday 27 March 2025 00:30:01 +0000 (0:00:00.177) 0:00:05.242 ******** 2025-03-27 00:30:02.078252 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:02.078468 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:30:02.078494 | orchestrator | changed: [testbed-node-2] 2025-03-27 
00:30:02.078515 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:02.080865 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:30:02.081125 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:02.081501 | orchestrator | 2025-03-27 00:30:02.081703 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-03-27 00:30:02.082003 | orchestrator | Thursday 27 March 2025 00:30:02 +0000 (0:00:00.788) 0:00:06.031 ******** 2025-03-27 00:30:03.082798 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:30:03.082969 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:03.082990 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:03.083009 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:30:03.084270 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:03.085172 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:30:03.086349 | orchestrator | 2025-03-27 00:30:03.086730 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-03-27 00:30:03.088674 | orchestrator | Thursday 27 March 2025 00:30:03 +0000 (0:00:01.004) 0:00:07.035 ******** 2025-03-27 00:30:04.549126 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-03-27 00:30:04.549627 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-03-27 00:30:04.549668 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-03-27 00:30:04.549906 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-03-27 00:30:04.551382 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-03-27 00:30:04.554108 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-03-27 00:30:04.555329 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-03-27 00:30:04.555359 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-03-27 00:30:04.556591 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-03-27 00:30:04.556915 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-03-27 00:30:04.557989 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-03-27 00:30:04.558344 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-03-27 00:30:04.559647 | orchestrator | 2025-03-27 00:30:04.559883 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-03-27 00:30:04.560823 | orchestrator | Thursday 27 March 2025 00:30:04 +0000 (0:00:01.470) 0:00:08.505 ******** 2025-03-27 00:30:05.965052 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:30:05.965550 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:30:05.965837 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:05.965849 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:30:05.968406 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:05.969379 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:05.972658 | orchestrator | 2025-03-27 00:30:05.973455 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-03-27 00:30:05.974954 | orchestrator | Thursday 27 March 2025 00:30:05 +0000 (0:00:01.412) 0:00:09.918 ******** 2025-03-27 00:30:07.220784 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-03-27 00:30:07.223612 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-03-27 00:30:07.223984 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-03-27 00:30:07.433566 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-03-27 00:30:07.436013 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-03-27 00:30:07.436063 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-03-27 00:30:07.436280 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-03-27 00:30:07.437251 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-03-27 00:30:07.437565 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-03-27 00:30:07.438271 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-03-27 00:30:07.438309 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-03-27 00:30:07.438585 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-03-27 00:30:07.439813 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-03-27 00:30:07.441008 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-03-27 00:30:07.441042 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-03-27 00:30:07.443037 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-03-27 00:30:07.443070 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-03-27 00:30:07.443090 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-03-27 00:30:07.443863 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-03-27 00:30:07.444273 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-03-27 00:30:07.444305 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-03-27 00:30:07.444628 | orchestrator | 2025-03-27 00:30:07.445179 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-03-27 00:30:07.445378 | orchestrator | Thursday 27 March 2025 00:30:07 +0000 (0:00:01.470) 0:00:11.389 ******** 2025-03-27 00:30:08.058885 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:30:08.059062 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:30:08.059665 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:08.060282 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:08.061202 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:30:08.061944 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:08.061983 | orchestrator | 2025-03-27 00:30:08.062529 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-03-27 00:30:08.062824 | orchestrator | Thursday 27 March 2025 00:30:08 +0000 (0:00:00.623) 0:00:12.012 ******** 2025-03-27 00:30:08.138009 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:30:08.162954 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:30:08.190162 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:30:08.250852 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:30:08.251754 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:30:08.251856 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:30:08.252806 | orchestrator | 2025-03-27 00:30:08.256263 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-03-27 00:30:08.256866 | orchestrator | Thursday 27 March 2025 00:30:08 +0000 (0:00:00.194) 0:00:12.206 ******** 2025-03-27 00:30:09.072147 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-27 00:30:09.072394 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:09.073048 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-27 00:30:09.073191 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:30:09.077532 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-27 00:30:09.078107 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:09.078307 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-27 00:30:09.079144 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:09.079457 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-03-27 00:30:09.079963 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:30:09.080691 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-03-27 00:30:09.083919 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:30:09.084714 | orchestrator | 2025-03-27 00:30:09.085461 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-03-27 00:30:09.085755 | orchestrator | Thursday 27 March 2025 00:30:09 +0000 (0:00:00.817) 0:00:13.024 ******** 2025-03-27 00:30:09.133930 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:30:09.165111 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:30:09.202383 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:30:09.225044 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:30:09.259884 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:30:09.261713 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:30:09.263709 | orchestrator | 2025-03-27 00:30:09.265086 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-03-27 00:30:09.265945 | orchestrator | Thursday 27 March 2025 00:30:09 +0000 (0:00:00.192) 0:00:13.216 ******** 2025-03-27 00:30:09.306858 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:30:09.335441 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:30:09.357609 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:30:09.404199 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:30:09.448201 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:30:09.452273 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:30:09.453575 | orchestrator | 2025-03-27 00:30:09.456915 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-03-27 00:30:09.459521 | orchestrator | Thursday 27 March 2025 00:30:09 +0000 (0:00:00.186) 0:00:13.402 ******** 2025-03-27 00:30:09.515910 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:30:09.537694 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:30:09.560146 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:30:09.602889 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:30:09.605913 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:30:09.606891 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:30:09.606928 | orchestrator | 2025-03-27 00:30:09.608417 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-03-27 00:30:09.609136 | orchestrator | Thursday 27 March 2025 00:30:09 +0000 (0:00:00.157) 0:00:13.560 ******** 2025-03-27 00:30:10.474207 | orchestrator | changed: [testbed-node-0] 2025-03-27 
00:30:10.474850 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:30:10.474889 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:10.475159 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:30:10.475790 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:10.476535 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:10.477480 | orchestrator | 2025-03-27 00:30:10.478192 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-03-27 00:30:10.480477 | orchestrator | Thursday 27 March 2025 00:30:10 +0000 (0:00:00.866) 0:00:14.426 ******** 2025-03-27 00:30:10.586703 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:30:10.610617 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:30:10.728569 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:30:10.729605 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:30:10.732950 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:30:10.733569 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:30:10.733957 | orchestrator | 2025-03-27 00:30:10.735199 | orchestrator | 2025-03-27 00:30:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:30:10.735350 | orchestrator | 2025-03-27 00:30:10 | INFO  | Please wait and do not abort execution. 2025-03-27 00:30:10.735377 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:30:10.735763 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 00:30:10.736562 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 00:30:10.737418 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 00:30:10.738214 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 00:30:10.738476 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 00:30:10.739338 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 00:30:10.739818 | orchestrator | 2025-03-27 00:30:10.740366 | orchestrator | Thursday 27 March 2025 00:30:10 +0000 (0:00:00.259) 0:00:14.686 ******** 2025-03-27 00:30:10.740794 | orchestrator | =============================================================================== 2025-03-27 00:30:10.741173 | orchestrator | Gathering Facts --------------------------------------------------------- 3.83s 2025-03-27 00:30:10.741565 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.47s 2025-03-27 00:30:10.742397 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.47s 2025-03-27 00:30:10.742948 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.41s 2025-03-27 00:30:10.743641 | orchestrator | osism.commons.operator : Create user ------------------------------------ 1.00s 2025-03-27 00:30:10.744119 | orchestrator | Do not require tty for all users ---------------------------------------- 0.90s 2025-03-27 00:30:10.744807 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.87s 2025-03-27 00:30:10.745778 | orchestrator | osism.commons.operator : Set ssh authorized keys 
------------------------ 0.82s 2025-03-27 00:30:10.746085 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.79s 2025-03-27 00:30:10.746493 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s 2025-03-27 00:30:10.747023 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s 2025-03-27 00:30:10.747355 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.24s 2025-03-27 00:30:10.747599 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s 2025-03-27 00:30:10.747963 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s 2025-03-27 00:30:10.748324 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2025-03-27 00:30:10.748728 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2025-03-27 00:30:10.750195 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-03-27 00:30:11.221437 | orchestrator | + osism apply --environment custom facts 2025-03-27 00:30:12.683912 | orchestrator | 2025-03-27 00:30:12 | INFO  | Trying to run play facts in environment custom 2025-03-27 00:30:12.739665 | orchestrator | 2025-03-27 00:30:12 | INFO  | Task 5864bae5-a815-47a9-aa19-ec45f3c0eb69 (facts) was prepared for execution. 2025-03-27 00:30:16.055449 | orchestrator | 2025-03-27 00:30:12 | INFO  | It takes a moment until task 5864bae5-a815-47a9-aa19-ec45f3c0eb69 (facts) has been started and output is visible here. 2025-03-27 00:30:16.055603 | orchestrator | 2025-03-27 00:30:16.057024 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-03-27 00:30:16.057064 | orchestrator | 2025-03-27 00:30:16.058213 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-03-27 00:30:16.058765 | orchestrator | Thursday 27 March 2025 00:30:16 +0000 (0:00:00.085) 0:00:00.085 ******** 2025-03-27 00:30:17.415653 | orchestrator | ok: [testbed-manager] 2025-03-27 00:30:18.573342 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:30:18.574644 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:18.578132 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:18.581351 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:18.582270 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:30:18.582610 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:30:18.583308 | orchestrator | 2025-03-27 00:30:18.583911 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-03-27 00:30:18.584381 | orchestrator | Thursday 27 March 2025 00:30:18 +0000 (0:00:02.519) 0:00:02.605 ******** 2025-03-27 00:30:19.847701 | orchestrator | ok: [testbed-manager] 2025-03-27 00:30:20.846370 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:20.846505 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:30:20.852932 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:20.854538 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:20.855311 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:30:20.856336 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:30:20.857107 | orchestrator | 2025-03-27 00:30:20.858389 | orchestrator | PLAY [Copy custom ceph devices facts] 
****************************************** 2025-03-27 00:30:20.860404 | orchestrator | 2025-03-27 00:30:20.861158 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-27 00:30:20.862005 | orchestrator | Thursday 27 March 2025 00:30:20 +0000 (0:00:02.270) 0:00:04.875 ******** 2025-03-27 00:30:20.949066 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:30:20.951350 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:30:20.952800 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:30:20.954157 | orchestrator | 2025-03-27 00:30:20.956266 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-03-27 00:30:20.957126 | orchestrator | Thursday 27 March 2025 00:30:20 +0000 (0:00:00.106) 0:00:04.981 ******** 2025-03-27 00:30:21.088449 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:30:21.089487 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:30:21.090154 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:30:21.093680 | orchestrator | 2025-03-27 00:30:21.094122 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-27 00:30:21.094412 | orchestrator | Thursday 27 March 2025 00:30:21 +0000 (0:00:00.139) 0:00:05.121 ******** 2025-03-27 00:30:21.238204 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:30:21.241941 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:30:21.241991 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:30:21.378334 | orchestrator | 2025-03-27 00:30:21.378399 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-27 00:30:21.378417 | orchestrator | Thursday 27 March 2025 00:30:21 +0000 (0:00:00.146) 0:00:05.267 ******** 2025-03-27 00:30:21.378476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 00:30:21.378583 | orchestrator | 2025-03-27 00:30:21.378615 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-27 00:30:21.378643 | orchestrator | Thursday 27 March 2025 00:30:21 +0000 (0:00:00.142) 0:00:05.410 ******** 2025-03-27 00:30:21.865755 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:30:21.866323 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:30:21.867332 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:30:21.867481 | orchestrator | 2025-03-27 00:30:21.869504 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-27 00:30:21.870237 | orchestrator | Thursday 27 March 2025 00:30:21 +0000 (0:00:00.488) 0:00:05.898 ******** 2025-03-27 00:30:21.977358 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:30:21.981001 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:30:21.981529 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:30:21.981662 | orchestrator | 2025-03-27 00:30:21.982591 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-27 00:30:21.983356 | orchestrator | Thursday 27 March 2025 00:30:21 +0000 (0:00:00.112) 0:00:06.011 ******** 2025-03-27 00:30:23.198132 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:23.200160 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:23.200482 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:23.204681 | orchestrator | 2025-03-27 00:30:23.205022 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-03-27 00:30:23.685564 | orchestrator | Thursday 27 March 2025 00:30:23 +0000 (0:00:01.218) 0:00:07.229 ******** 2025-03-27 00:30:23.685649 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:30:23.687351 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:30:23.687499 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:30:23.687867 | orchestrator | 2025-03-27 00:30:23.688377 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-27 00:30:23.688597 | orchestrator | Thursday 27 March 2025 00:30:23 +0000 (0:00:00.489) 0:00:07.718 ******** 2025-03-27 00:30:24.815263 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:24.815881 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:24.816986 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:24.817412 | orchestrator | 2025-03-27 00:30:24.818427 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-27 00:30:24.819328 | orchestrator | Thursday 27 March 2025 00:30:24 +0000 (0:00:01.122) 0:00:08.840 ******** 2025-03-27 00:30:39.076050 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:39.076245 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:39.076270 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:39.076290 | orchestrator | 2025-03-27 00:30:39.077202 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-03-27 00:30:39.078468 | orchestrator | Thursday 27 March 2025 00:30:39 +0000 (0:00:14.260) 0:00:23.101 ******** 2025-03-27 00:30:39.124123 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:30:39.176608 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:30:39.178340 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:30:39.183014 | orchestrator | 2025-03-27 00:30:39.184321 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-03-27 00:30:39.185255 | orchestrator | Thursday 27 March 2025 00:30:39 +0000 (0:00:00.106) 0:00:23.207 ******** 2025-03-27 00:30:47.850622 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:30:47.850798 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:30:47.850822 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:30:47.850844 | orchestrator | 2025-03-27 00:30:47.851462 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-03-27 00:30:47.852329 | orchestrator | Thursday 27 March 2025 00:30:47 +0000 (0:00:08.672) 0:00:31.880 ******** 2025-03-27 00:30:48.328971 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:30:48.329103 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:30:48.329622 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:30:48.330830 | orchestrator | 2025-03-27 00:30:48.331076 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-03-27 00:30:48.331962 | orchestrator | Thursday 27 March 2025 00:30:48 +0000 (0:00:00.481) 0:00:32.361 ******** 2025-03-27 00:30:52.183040 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-03-27 00:30:52.183290 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-03-27 00:30:52.184699 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-03-27 00:30:52.184757 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 
2025-03-27 00:30:52.186070 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-03-27 00:30:52.187432 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-03-27 00:30:52.187461 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-03-27 00:30:52.187671 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-03-27 00:30:52.188411 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-03-27 00:30:52.188477 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-03-27 00:30:52.190003 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-03-27 00:30:52.190099 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-03-27 00:30:52.190906 | orchestrator | 2025-03-27 00:30:52.191093 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-03-27 00:30:52.192570 | orchestrator | Thursday 27 March 2025 00:30:52 +0000 (0:00:03.852) 0:00:36.214 ******** 2025-03-27 00:30:53.568696 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:30:53.569678 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:30:53.569865 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:30:53.571982 | orchestrator | 2025-03-27 00:30:53.574588 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-27 00:30:53.574934 | orchestrator | 2025-03-27 00:30:53.574962 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-27 00:30:53.576258 | orchestrator | Thursday 27 March 2025 00:30:53 +0000 (0:00:01.384) 0:00:37.599 ******** 2025-03-27 00:30:55.374995 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:30:58.970335 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:30:58.971577 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:30:58.975054 | orchestrator | ok: [testbed-manager] 2025-03-27 00:30:58.975099 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:30:58.975129 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:30:58.975146 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:30:58.975160 | orchestrator | 2025-03-27 00:30:58.975176 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:30:58.975197 | orchestrator | 2025-03-27 00:30:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:30:58.976389 | orchestrator | 2025-03-27 00:30:58 | INFO  | Please wait and do not abort execution. 
2025-03-27 00:30:58.976426 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:30:58.976895 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:30:58.978095 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:30:58.978966 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:30:58.979003 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:30:58.980061 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:30:58.980315 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:30:58.981619 | orchestrator | 2025-03-27 00:30:58.987037 | orchestrator | Thursday 27 March 2025 00:30:58 +0000 (0:00:05.401) 0:00:43.000 ******** 2025-03-27 00:30:58.987490 | orchestrator | =============================================================================== 2025-03-27 00:30:58.987543 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.26s 2025-03-27 00:30:58.987559 | orchestrator | Install required packages (Debian) -------------------------------------- 8.67s 2025-03-27 00:30:58.987573 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.40s 2025-03-27 00:30:58.987587 | orchestrator | Copy fact files --------------------------------------------------------- 3.85s 2025-03-27 00:30:58.987602 | orchestrator | Create custom facts directory ------------------------------------------- 2.52s 2025-03-27 00:30:58.987616 | orchestrator | Copy fact file ---------------------------------------------------------- 2.27s 2025-03-27 00:30:58.987635 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.38s 2025-03-27 00:30:58.989692 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.22s 2025-03-27 00:30:58.993778 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.12s 2025-03-27 00:30:58.993813 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s 2025-03-27 00:30:58.997626 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.49s 2025-03-27 00:30:58.998236 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s 2025-03-27 00:30:58.998262 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.15s 2025-03-27 00:30:58.998277 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2025-03-27 00:30:58.998292 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.14s 2025-03-27 00:30:58.998306 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-03-27 00:30:58.998321 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-03-27 00:30:58.998340 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-03-27 00:30:59.503839 | orchestrator | + osism apply bootstrap 2025-03-27 00:31:01.067771 | 
orchestrator | 2025-03-27 00:31:01 | INFO  | Task ef708c79-5361-434e-b822-abb54ff8b198 (bootstrap) was prepared for execution. 2025-03-27 00:31:04.497788 | orchestrator | 2025-03-27 00:31:01 | INFO  | It takes a moment until task ef708c79-5361-434e-b822-abb54ff8b198 (bootstrap) has been started and output is visible here. 2025-03-27 00:31:04.497929 | orchestrator | 2025-03-27 00:31:04.500174 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-03-27 00:31:04.501086 | orchestrator | 2025-03-27 00:31:04.501868 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-03-27 00:31:04.504273 | orchestrator | Thursday 27 March 2025 00:31:04 +0000 (0:00:00.120) 0:00:00.120 ******** 2025-03-27 00:31:04.609125 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:04.630137 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:04.665171 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:04.777517 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:04.778723 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:04.779937 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:04.781320 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:04.782273 | orchestrator | 2025-03-27 00:31:04.783616 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-27 00:31:04.784369 | orchestrator | 2025-03-27 00:31:04.785444 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-27 00:31:04.786272 | orchestrator | Thursday 27 March 2025 00:31:04 +0000 (0:00:00.286) 0:00:00.406 ******** 2025-03-27 00:31:09.202126 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:09.202340 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:09.203238 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:09.204166 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:09.204504 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:09.205363 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:09.206821 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:09.207103 | orchestrator | 2025-03-27 00:31:09.207727 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-03-27 00:31:09.207779 | orchestrator | 2025-03-27 00:31:09.208249 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-27 00:31:09.208540 | orchestrator | Thursday 27 March 2025 00:31:09 +0000 (0:00:04.425) 0:00:04.831 ******** 2025-03-27 00:31:09.292734 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-03-27 00:31:09.331552 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-03-27 00:31:09.331602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-03-27 00:31:09.334994 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-03-27 00:31:09.335368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 00:31:09.335889 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-03-27 00:31:09.336331 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-03-27 00:31:09.385157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 00:31:09.385591 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-27 00:31:09.385891 | orchestrator | skipping: [testbed-manager] 
=> (item=testbed-node-0)  2025-03-27 00:31:09.385916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 00:31:09.386243 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-27 00:31:09.386569 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-03-27 00:31:09.387005 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-27 00:31:09.435057 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-03-27 00:31:09.435300 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-03-27 00:31:09.435787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-27 00:31:09.435994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-27 00:31:09.437951 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-27 00:31:09.699886 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-27 00:31:09.699974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-27 00:31:09.704079 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:31:09.706327 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-27 00:31:09.706818 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:31:09.708550 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-27 00:31:09.709646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-03-27 00:31:09.713068 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-03-27 00:31:09.713575 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-27 00:31:09.714475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-27 00:31:09.715300 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:31:09.715933 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-27 00:31:09.716839 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-27 00:31:09.717514 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 00:31:09.718406 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-27 00:31:09.719030 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-27 00:31:09.719853 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 00:31:09.720265 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-27 00:31:09.721232 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-03-27 00:31:09.722062 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 00:31:09.722377 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-27 00:31:09.723166 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:31:09.723757 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-27 00:31:09.724476 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-03-27 00:31:09.724988 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 00:31:09.725364 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-03-27 00:31:09.726178 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-27 00:31:09.727903 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 00:31:09.728857 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-03-27 00:31:09.730540 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:31:09.731234 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-27 00:31:09.732065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 00:31:09.733486 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:31:09.733700 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-03-27 00:31:09.734559 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-03-27 00:31:09.737455 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-03-27 00:31:09.737803 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:31:09.738586 | orchestrator | 2025-03-27 00:31:09.739306 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-03-27 00:31:09.739656 | orchestrator | 2025-03-27 00:31:09.740408 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-03-27 00:31:09.740819 | orchestrator | Thursday 27 March 2025 00:31:09 +0000 (0:00:00.497) 0:00:05.329 ******** 2025-03-27 00:31:09.781966 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:09.823494 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:09.853541 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:09.892246 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:09.956297 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:09.956752 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:09.957540 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:09.958917 | orchestrator | 2025-03-27 00:31:09.960942 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-03-27 00:31:11.280904 | orchestrator | Thursday 27 March 2025 00:31:09 +0000 (0:00:00.255) 0:00:05.585 ******** 2025-03-27 00:31:11.281031 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:11.282155 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:11.283982 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:11.284592 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:11.284621 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:11.285825 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:11.286624 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:11.287254 | orchestrator | 2025-03-27 00:31:11.288104 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-03-27 00:31:11.289064 | orchestrator | Thursday 27 March 2025 00:31:11 +0000 (0:00:01.324) 0:00:06.910 ******** 2025-03-27 00:31:12.744735 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:12.746702 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:12.748293 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:12.748330 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:12.749698 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:12.750199 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:12.751463 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:12.753998 | orchestrator | 2025-03-27 00:31:12.754181 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-03-27 00:31:12.754982 | orchestrator | Thursday 27 March 2025 00:31:12 +0000 (0:00:01.462) 0:00:08.372 ******** 2025-03-27 00:31:13.099144 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:31:13.101450 | orchestrator | 2025-03-27 00:31:13.101567 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-03-27 00:31:15.205539 | orchestrator | Thursday 27 March 2025 00:31:13 +0000 (0:00:00.353) 0:00:08.726 ******** 2025-03-27 00:31:15.205669 | orchestrator | changed: [testbed-manager] 2025-03-27 00:31:15.206667 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:31:15.208701 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:31:15.209537 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:31:15.210596 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:31:15.211738 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:31:15.214406 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:31:15.215569 | orchestrator | 2025-03-27 00:31:15.216998 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-03-27 00:31:15.217958 | orchestrator | Thursday 27 March 2025 00:31:15 +0000 (0:00:02.105) 0:00:10.831 ******** 2025-03-27 00:31:15.299455 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:31:15.498260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:31:15.498824 | orchestrator | 2025-03-27 00:31:15.499017 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-03-27 00:31:15.499484 | orchestrator | Thursday 27 March 2025 00:31:15 +0000 (0:00:00.295) 0:00:11.127 ******** 2025-03-27 00:31:16.642427 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:31:16.646297 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:31:16.646368 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:31:16.646392 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:31:16.647555 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:31:16.647961 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:31:16.649044 | orchestrator | 2025-03-27 00:31:16.649139 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-03-27 00:31:16.649989 | orchestrator | Thursday 27 March 2025 00:31:16 +0000 (0:00:01.142) 0:00:12.270 ******** 2025-03-27 00:31:16.740167 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:31:17.291984 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:31:17.292163 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:31:17.293256 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:31:17.294462 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:31:17.294935 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:31:17.296027 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:31:17.297173 | orchestrator | 2025-03-27 00:31:17.297522 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-03-27 00:31:17.298096 | orchestrator | Thursday 27 March 2025 00:31:17 +0000 (0:00:00.649) 0:00:12.919 ******** 2025-03-27 00:31:17.397414 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:31:17.425033 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:31:17.449201 | 
orchestrator | skipping: [testbed-node-5] 2025-03-27 00:31:17.742767 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:31:17.743562 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:31:17.744978 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:31:17.745933 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:17.747067 | orchestrator | 2025-03-27 00:31:17.748744 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-03-27 00:31:17.749156 | orchestrator | Thursday 27 March 2025 00:31:17 +0000 (0:00:00.450) 0:00:13.369 ******** 2025-03-27 00:31:17.817889 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:31:17.843331 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:31:17.875353 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:31:17.899089 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:31:17.984348 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:31:17.984512 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:31:17.985811 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:31:17.986873 | orchestrator | 2025-03-27 00:31:17.987788 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-03-27 00:31:17.988193 | orchestrator | Thursday 27 March 2025 00:31:17 +0000 (0:00:00.243) 0:00:13.612 ******** 2025-03-27 00:31:18.301015 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:31:18.301318 | orchestrator | 2025-03-27 00:31:18.302179 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-03-27 00:31:18.302664 | orchestrator | Thursday 27 March 2025 00:31:18 +0000 (0:00:00.315) 0:00:13.928 ******** 2025-03-27 00:31:18.644584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:31:18.645673 | orchestrator | 2025-03-27 00:31:18.645711 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-03-27 00:31:18.646228 | orchestrator | Thursday 27 March 2025 00:31:18 +0000 (0:00:00.344) 0:00:14.273 ******** 2025-03-27 00:31:20.015708 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:20.017559 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:20.018651 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:20.020908 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:20.021946 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:20.022762 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:20.023292 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:20.023808 | orchestrator | 2025-03-27 00:31:20.024591 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-03-27 00:31:20.024995 | orchestrator | Thursday 27 March 2025 00:31:20 +0000 (0:00:01.368) 0:00:15.642 ******** 2025-03-27 00:31:20.099331 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:31:20.129273 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:31:20.157948 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:31:20.180960 | orchestrator | skipping: 
[testbed-node-5] 2025-03-27 00:31:20.257845 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:31:20.262403 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:31:20.262571 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:31:20.842170 | orchestrator | 2025-03-27 00:31:20.842320 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-03-27 00:31:20.842340 | orchestrator | Thursday 27 March 2025 00:31:20 +0000 (0:00:00.244) 0:00:15.886 ******** 2025-03-27 00:31:20.842369 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:20.842988 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:20.846086 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:20.847439 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:20.847465 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:20.847485 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:20.848333 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:20.849580 | orchestrator | 2025-03-27 00:31:20.850677 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-03-27 00:31:20.852163 | orchestrator | Thursday 27 March 2025 00:31:20 +0000 (0:00:00.583) 0:00:16.470 ******** 2025-03-27 00:31:20.925370 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:31:20.960442 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:31:20.995493 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:31:21.026693 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:31:21.112834 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:31:21.113903 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:31:21.114881 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:31:21.115783 | orchestrator | 2025-03-27 00:31:21.117120 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-03-27 00:31:21.118251 | orchestrator | Thursday 27 March 2025 00:31:21 +0000 (0:00:00.271) 0:00:16.741 ******** 2025-03-27 00:31:21.724138 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:21.724617 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:31:21.724682 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:31:21.727669 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:31:21.728280 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:31:21.728825 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:31:21.729501 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:31:21.730053 | orchestrator | 2025-03-27 00:31:21.730647 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-03-27 00:31:21.731165 | orchestrator | Thursday 27 March 2025 00:31:21 +0000 (0:00:00.609) 0:00:17.351 ******** 2025-03-27 00:31:23.008963 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:23.009896 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:31:23.009936 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:31:23.011134 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:31:23.011940 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:31:23.012647 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:31:23.013690 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:31:23.014116 | orchestrator | 2025-03-27 00:31:23.015031 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-03-27 00:31:23.015463 | orchestrator | Thursday 27 March 
2025 00:31:22 +0000 (0:00:01.282) 0:00:18.633 ******** 2025-03-27 00:31:24.450414 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:24.450558 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:24.450582 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:24.450799 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:24.451431 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:24.455408 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:24.839500 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:24.839614 | orchestrator | 2025-03-27 00:31:24.839633 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-03-27 00:31:24.839649 | orchestrator | Thursday 27 March 2025 00:31:24 +0000 (0:00:01.443) 0:00:20.077 ******** 2025-03-27 00:31:24.839698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:31:24.840050 | orchestrator | 2025-03-27 00:31:24.841563 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-03-27 00:31:24.842498 | orchestrator | Thursday 27 March 2025 00:31:24 +0000 (0:00:00.387) 0:00:20.464 ******** 2025-03-27 00:31:24.923062 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:31:26.556720 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:31:26.556927 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:31:26.558135 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:31:26.558377 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:31:26.558959 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:31:26.559761 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:31:26.561750 | orchestrator | 2025-03-27 00:31:26.562166 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-27 00:31:26.562693 | orchestrator | Thursday 27 March 2025 00:31:26 +0000 (0:00:01.718) 0:00:22.182 ******** 2025-03-27 00:31:26.648956 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:26.680375 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:26.715427 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:26.736666 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:26.806395 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:26.809563 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:26.809593 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:26.809996 | orchestrator | 2025-03-27 00:31:26.810282 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-03-27 00:31:26.810760 | orchestrator | Thursday 27 March 2025 00:31:26 +0000 (0:00:00.251) 0:00:22.434 ******** 2025-03-27 00:31:26.889724 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:26.923642 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:26.954165 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:26.984675 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:27.088838 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:27.092570 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:27.093397 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:27.093428 | orchestrator | 2025-03-27 00:31:27.096046 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-27 00:31:27.096321 | 
orchestrator | Thursday 27 March 2025 00:31:27 +0000 (0:00:00.283) 0:00:22.717 ******** 2025-03-27 00:31:27.231707 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:27.260893 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:27.295318 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:27.318260 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:27.403709 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:27.405179 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:27.405390 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:27.405842 | orchestrator | 2025-03-27 00:31:27.406881 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-27 00:31:27.407111 | orchestrator | Thursday 27 March 2025 00:31:27 +0000 (0:00:00.311) 0:00:23.029 ******** 2025-03-27 00:31:27.771645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:31:27.772364 | orchestrator | 2025-03-27 00:31:27.775062 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-27 00:31:28.345018 | orchestrator | Thursday 27 March 2025 00:31:27 +0000 (0:00:00.370) 0:00:23.399 ******** 2025-03-27 00:31:28.345124 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:28.345702 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:28.347446 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:28.348358 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:28.349119 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:28.349825 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:28.350576 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:28.351354 | orchestrator | 2025-03-27 00:31:28.351855 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-27 00:31:28.352429 | orchestrator | Thursday 27 March 2025 00:31:28 +0000 (0:00:00.574) 0:00:23.974 ******** 2025-03-27 00:31:28.427432 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:31:28.449961 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:31:28.485386 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:31:28.512467 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:31:28.589308 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:31:28.589866 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:31:28.589964 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:31:28.590389 | orchestrator | 2025-03-27 00:31:28.590999 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-27 00:31:28.591203 | orchestrator | Thursday 27 March 2025 00:31:28 +0000 (0:00:00.241) 0:00:24.216 ******** 2025-03-27 00:31:29.702264 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:29.703859 | orchestrator | changed: [testbed-manager] 2025-03-27 00:31:29.704520 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:29.705562 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:29.706559 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:31:29.707552 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:31:29.708630 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:31:29.710229 | orchestrator | 2025-03-27 00:31:29.711606 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
********************* 2025-03-27 00:31:29.712627 | orchestrator | Thursday 27 March 2025 00:31:29 +0000 (0:00:01.111) 0:00:25.328 ******** 2025-03-27 00:31:30.309895 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:30.311103 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:30.311910 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:30.312351 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:30.312756 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:30.315358 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:30.315868 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:30.315893 | orchestrator | 2025-03-27 00:31:30.315915 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-27 00:31:30.317287 | orchestrator | Thursday 27 March 2025 00:31:30 +0000 (0:00:00.610) 0:00:25.938 ******** 2025-03-27 00:31:31.470547 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:31.470735 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:31.474125 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:31.474333 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:31.474352 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:31:31.474367 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:31:31.474765 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:31:31.475156 | orchestrator | 2025-03-27 00:31:31.475639 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-27 00:31:31.476049 | orchestrator | Thursday 27 March 2025 00:31:31 +0000 (0:00:01.158) 0:00:27.096 ******** 2025-03-27 00:31:45.728891 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:45.729495 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:45.729563 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:45.729606 | orchestrator | changed: [testbed-manager] 2025-03-27 00:31:45.730308 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:31:45.731331 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:31:45.732444 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:31:45.732748 | orchestrator | 2025-03-27 00:31:45.732779 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-03-27 00:31:45.734886 | orchestrator | Thursday 27 March 2025 00:31:45 +0000 (0:00:14.255) 0:00:41.352 ******** 2025-03-27 00:31:45.808600 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:45.838624 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:45.868572 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:45.897769 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:45.974963 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:45.975367 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:45.977292 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:45.978232 | orchestrator | 2025-03-27 00:31:45.978593 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-03-27 00:31:45.979794 | orchestrator | Thursday 27 March 2025 00:31:45 +0000 (0:00:00.250) 0:00:41.603 ******** 2025-03-27 00:31:46.054663 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:46.092671 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:46.123416 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:46.152265 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:46.221571 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:46.221931 | orchestrator | ok: [testbed-node-1] 2025-03-27 
00:31:46.222866 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:46.223881 | orchestrator | 2025-03-27 00:31:46.225797 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-03-27 00:31:46.306663 | orchestrator | Thursday 27 March 2025 00:31:46 +0000 (0:00:00.246) 0:00:41.849 ******** 2025-03-27 00:31:46.306698 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:46.346195 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:46.381646 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:46.406595 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:46.474944 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:46.475483 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:46.477166 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:46.478316 | orchestrator | 2025-03-27 00:31:46.479307 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-03-27 00:31:46.480000 | orchestrator | Thursday 27 March 2025 00:31:46 +0000 (0:00:00.254) 0:00:42.104 ******** 2025-03-27 00:31:46.842605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:31:46.843330 | orchestrator | 2025-03-27 00:31:46.843771 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-03-27 00:31:46.844492 | orchestrator | Thursday 27 March 2025 00:31:46 +0000 (0:00:00.364) 0:00:42.468 ******** 2025-03-27 00:31:48.551441 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:48.551635 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:48.551665 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:48.552144 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:31:48.552513 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:48.553290 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:48.553552 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:48.554288 | orchestrator | 2025-03-27 00:31:48.554440 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-03-27 00:31:48.554866 | orchestrator | Thursday 27 March 2025 00:31:48 +0000 (0:00:01.709) 0:00:44.178 ******** 2025-03-27 00:31:49.721771 | orchestrator | changed: [testbed-manager] 2025-03-27 00:31:49.722605 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:31:49.723930 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:31:49.724940 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:31:49.726092 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:31:49.727127 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:31:49.727650 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:31:49.728596 | orchestrator | 2025-03-27 00:31:49.729906 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-03-27 00:31:49.730420 | orchestrator | Thursday 27 March 2025 00:31:49 +0000 (0:00:01.170) 0:00:45.348 ******** 2025-03-27 00:31:50.597446 | orchestrator | ok: [testbed-manager] 2025-03-27 00:31:50.598506 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:31:50.600422 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:31:50.600824 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:31:50.602260 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:31:50.603920 | orchestrator | ok: 
[testbed-node-0] 2025-03-27 00:31:50.605265 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:31:50.606081 | orchestrator | 2025-03-27 00:31:50.607017 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-03-27 00:31:50.607456 | orchestrator | Thursday 27 March 2025 00:31:50 +0000 (0:00:00.873) 0:00:46.222 ******** 2025-03-27 00:31:50.941987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:31:50.942856 | orchestrator | 2025-03-27 00:31:50.943531 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-03-27 00:31:50.945823 | orchestrator | Thursday 27 March 2025 00:31:50 +0000 (0:00:00.348) 0:00:46.571 ******** 2025-03-27 00:31:51.949270 | orchestrator | changed: [testbed-manager] 2025-03-27 00:31:51.950085 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:31:51.950903 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:31:51.953935 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:31:51.954362 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:31:51.954397 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:31:51.955421 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:31:51.957103 | orchestrator | 2025-03-27 00:31:51.957885 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-03-27 00:31:51.958526 | orchestrator | Thursday 27 March 2025 00:31:51 +0000 (0:00:01.005) 0:00:47.576 ******** 2025-03-27 00:31:52.029627 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:31:52.052642 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:31:52.073150 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:31:52.092614 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:31:52.224448 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:31:52.225069 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:31:52.225108 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:31:52.227577 | orchestrator | 2025-03-27 00:31:52.228116 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-03-27 00:31:52.228810 | orchestrator | Thursday 27 March 2025 00:31:52 +0000 (0:00:00.276) 0:00:47.853 ******** 2025-03-27 00:32:05.937725 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:32:05.939977 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:32:05.940028 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:32:05.941358 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:32:05.941411 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:32:05.941744 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:32:05.942613 | orchestrator | changed: [testbed-manager] 2025-03-27 00:32:05.943329 | orchestrator | 2025-03-27 00:32:05.943638 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-03-27 00:32:05.944555 | orchestrator | Thursday 27 March 2025 00:32:05 +0000 (0:00:13.708) 0:01:01.562 ******** 2025-03-27 00:32:06.682112 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:32:06.687834 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:32:06.694557 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:32:06.694599 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:32:06.696417 | 
orchestrator | ok: [testbed-node-4] 2025-03-27 00:32:06.697583 | orchestrator | ok: [testbed-manager] 2025-03-27 00:32:06.697890 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:32:06.697916 | orchestrator | 2025-03-27 00:32:06.698262 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-03-27 00:32:06.698810 | orchestrator | Thursday 27 March 2025 00:32:06 +0000 (0:00:00.748) 0:01:02.310 ******** 2025-03-27 00:32:07.648566 | orchestrator | ok: [testbed-manager] 2025-03-27 00:32:07.648884 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:32:07.649553 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:32:07.650228 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:32:07.650936 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:32:07.654777 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:32:07.655154 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:32:07.655662 | orchestrator | 2025-03-27 00:32:07.656192 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-03-27 00:32:07.656798 | orchestrator | Thursday 27 March 2025 00:32:07 +0000 (0:00:00.966) 0:01:03.277 ******** 2025-03-27 00:32:07.737880 | orchestrator | ok: [testbed-manager] 2025-03-27 00:32:07.776482 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:32:07.810532 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:32:07.838249 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:32:07.923607 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:32:07.924877 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:32:07.924920 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:32:07.924991 | orchestrator | 2025-03-27 00:32:07.925009 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-03-27 00:32:07.925550 | orchestrator | Thursday 27 March 2025 00:32:07 +0000 (0:00:00.275) 0:01:03.552 ******** 2025-03-27 00:32:08.027534 | orchestrator | ok: [testbed-manager] 2025-03-27 00:32:08.059906 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:32:08.093063 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:32:08.137017 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:32:08.220184 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:32:08.222893 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:32:08.223164 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:32:08.223262 | orchestrator | 2025-03-27 00:32:08.223789 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-03-27 00:32:08.225052 | orchestrator | Thursday 27 March 2025 00:32:08 +0000 (0:00:00.294) 0:01:03.847 ******** 2025-03-27 00:32:08.561477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:32:08.562926 | orchestrator | 2025-03-27 00:32:08.564095 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-03-27 00:32:08.566263 | orchestrator | Thursday 27 March 2025 00:32:08 +0000 (0:00:00.342) 0:01:04.190 ******** 2025-03-27 00:32:10.356684 | orchestrator | ok: [testbed-manager] 2025-03-27 00:32:10.357883 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:32:10.359120 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:32:10.360608 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:32:10.361575 | 
orchestrator | ok: [testbed-node-1] 2025-03-27 00:32:10.363184 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:32:10.364192 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:32:10.364860 | orchestrator | 2025-03-27 00:32:10.365530 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-03-27 00:32:10.366611 | orchestrator | Thursday 27 March 2025 00:32:10 +0000 (0:00:01.792) 0:01:05.982 ******** 2025-03-27 00:32:10.989702 | orchestrator | changed: [testbed-manager] 2025-03-27 00:32:10.990350 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:32:10.991686 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:32:10.993131 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:32:10.993554 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:32:10.994581 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:32:10.995023 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:32:10.996246 | orchestrator | 2025-03-27 00:32:10.996540 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-03-27 00:32:10.997385 | orchestrator | Thursday 27 March 2025 00:32:10 +0000 (0:00:00.634) 0:01:06.616 ******** 2025-03-27 00:32:11.099295 | orchestrator | ok: [testbed-manager] 2025-03-27 00:32:11.127457 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:32:11.154726 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:32:11.197182 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:32:11.268986 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:32:11.270114 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:32:11.271285 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:32:11.272255 | orchestrator | 2025-03-27 00:32:11.272899 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-03-27 00:32:11.275221 | orchestrator | Thursday 27 March 2025 00:32:11 +0000 (0:00:00.281) 0:01:06.898 ******** 2025-03-27 00:32:12.433395 | orchestrator | ok: [testbed-manager] 2025-03-27 00:32:12.433546 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:32:12.433568 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:32:12.433588 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:32:12.434849 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:32:12.435623 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:32:12.435649 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:32:12.435665 | orchestrator | 2025-03-27 00:32:12.435681 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-03-27 00:32:12.435704 | orchestrator | Thursday 27 March 2025 00:32:12 +0000 (0:00:01.162) 0:01:08.061 ******** 2025-03-27 00:32:14.130606 | orchestrator | changed: [testbed-manager] 2025-03-27 00:32:14.131476 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:32:14.133042 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:32:14.134485 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:32:14.135910 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:32:14.136723 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:32:14.138483 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:32:14.140031 | orchestrator | 2025-03-27 00:32:14.141318 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-03-27 00:32:14.142461 | orchestrator | Thursday 27 March 2025 00:32:14 +0000 (0:00:01.695) 0:01:09.757 ******** 2025-03-27 00:32:16.694542 | orchestrator | ok: 
[testbed-manager] 2025-03-27 00:32:16.696019 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:32:16.696307 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:32:16.696340 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:32:16.697888 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:32:16.698521 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:32:16.700311 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:32:16.700717 | orchestrator | 2025-03-27 00:32:16.701374 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-03-27 00:32:16.702149 | orchestrator | Thursday 27 March 2025 00:32:16 +0000 (0:00:02.563) 0:01:12.321 ******** 2025-03-27 00:32:55.225802 | orchestrator | ok: [testbed-manager] 2025-03-27 00:32:55.225984 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:32:55.226010 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:32:55.226851 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:32:55.228789 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:32:55.229245 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:32:55.230260 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:32:55.231362 | orchestrator | 2025-03-27 00:32:55.234144 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-03-27 00:32:55.236126 | orchestrator | Thursday 27 March 2025 00:32:55 +0000 (0:00:38.526) 0:01:50.848 ******** 2025-03-27 00:34:16.044076 | orchestrator | changed: [testbed-manager] 2025-03-27 00:34:16.045528 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:34:16.045572 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:34:16.045593 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:34:16.045809 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:34:16.047214 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:34:16.047625 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:34:16.048388 | orchestrator | 2025-03-27 00:34:16.048931 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-03-27 00:34:16.049321 | orchestrator | Thursday 27 March 2025 00:34:16 +0000 (0:01:20.819) 0:03:11.667 ******** 2025-03-27 00:34:17.854249 | orchestrator | ok: [testbed-manager] 2025-03-27 00:34:17.855087 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:34:17.855125 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:34:17.855147 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:34:17.856272 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:34:17.857967 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:34:17.858811 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:34:17.859533 | orchestrator | 2025-03-27 00:34:17.860229 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-03-27 00:34:17.861255 | orchestrator | Thursday 27 March 2025 00:34:17 +0000 (0:00:01.809) 0:03:13.476 ******** 2025-03-27 00:34:32.173894 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:34:32.175293 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:34:32.175375 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:34:32.175433 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:34:32.175450 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:34:32.175468 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:34:32.176306 | orchestrator | changed: [testbed-manager] 2025-03-27 00:34:32.177347 | orchestrator | 2025-03-27 00:34:32.179371 | orchestrator | TASK [osism.commons.sysctl : Include sysctl 
tasks] ***************************** 2025-03-27 00:34:32.179929 | orchestrator | Thursday 27 March 2025 00:34:32 +0000 (0:00:14.322) 0:03:27.799 ******** 2025-03-27 00:34:32.680569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-03-27 00:34:32.680966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-03-27 00:34:32.681833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-03-27 00:34:32.682747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-03-27 00:34:32.683551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-03-27 00:34:32.683656 | orchestrator | 2025-03-27 00:34:32.684464 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-03-27 00:34:32.685398 | orchestrator | Thursday 27 March 2025 00:34:32 +0000 (0:00:00.509) 0:03:28.308 ******** 2025-03-27 00:34:32.745149 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-27 00:34:32.745330 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-27 00:34:32.769645 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:34:32.802755 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:34:32.803264 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-27 00:34:32.803314 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-27 00:34:32.831105 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:34:32.863365 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:34:34.419126 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-27 00:34:34.420636 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-27 00:34:34.420667 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-27 00:34:34.420687 | orchestrator | 2025-03-27 00:34:34.421116 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-03-27 00:34:34.421428 | orchestrator | Thursday 27 March 2025 00:34:34 +0000 (0:00:01.734) 0:03:30.043 ******** 2025-03-27 00:34:34.523308 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-27 00:34:34.524362 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-27 00:34:34.524925 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-27 00:34:34.525306 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-27 00:34:34.525938 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-27 00:34:34.527497 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-27 00:34:34.529367 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-27 00:34:34.529521 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-27 00:34:34.530824 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-27 00:34:34.530975 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-27 00:34:34.530998 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-27 00:34:34.531013 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-27 00:34:34.531053 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-27 00:34:34.531072 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-27 00:34:34.531297 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-27 00:34:34.531507 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-27 00:34:34.531808 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-27 00:34:34.532142 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-27 00:34:34.561990 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:34:34.637456 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-27 00:34:34.637560 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-27 00:34:34.637596 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:34:34.638364 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-27 
00:34:34.639324 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-27 00:34:34.640130 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-27 00:34:34.640846 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-27 00:34:34.642646 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-27 00:34:34.643377 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-27 00:34:34.644595 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-27 00:34:34.645356 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-27 00:34:34.646211 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-27 00:34:34.646691 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-27 00:34:34.647822 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-27 00:34:34.648593 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-27 00:34:34.649211 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-27 00:34:34.650274 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-27 00:34:34.651273 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-27 00:34:34.652160 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-27 00:34:34.653211 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-27 00:34:34.653982 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-27 00:34:34.654758 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-27 00:34:34.655173 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-27 00:34:34.666114 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:34:34.695475 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:34:39.708405 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-27 00:34:39.709375 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-27 00:34:39.710280 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-27 00:34:39.713674 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-27 00:34:39.713898 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-27 00:34:39.714772 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-27 00:34:39.716778 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-27 00:34:39.718338 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-27 00:34:39.719802 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-27 00:34:39.720907 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-27 00:34:39.721682 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-27 00:34:39.722339 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-27 00:34:39.723359 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-27 00:34:39.724308 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-27 00:34:39.724957 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-27 00:34:39.725872 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-27 00:34:39.726507 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-27 00:34:39.727061 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-27 00:34:39.727411 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-27 00:34:39.728441 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-27 00:34:39.729224 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-27 00:34:39.729680 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-27 00:34:39.730524 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-27 00:34:39.730964 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-27 00:34:39.732403 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-27 00:34:39.733776 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-27 00:34:39.734584 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-27 00:34:39.735083 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-27 00:34:39.736919 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-27 00:34:39.737639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-27 00:34:39.738699 | orchestrator | 2025-03-27 00:34:39.739335 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-03-27 00:34:39.739673 | orchestrator | Thursday 27 March 2025 00:34:39 +0000 (0:00:05.291) 0:03:35.334 ******** 2025-03-27 00:34:41.374555 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-27 00:34:41.375175 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 
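The sysctl values being applied in this stretch of the play come straight from the group definitions included a few tasks earlier (elasticsearch, rabbitmq, generic, compute, k3s_node). As a minimal illustrative sketch only, and not the actual implementation of the osism.commons.sysctl role, the "generic" and "rabbitmq" values seen in this log could be set with the stock ansible.posix.sysctl module; the play name, the sysctl_file path, and the "control" group name below are assumptions made for the example:

- name: Apply testbed sysctl values (illustrative sketch, not the osism role)
  hosts: all
  become: true
  tasks:
    # "generic" group from the log: minimize swapping on every host
    - name: Set vm.swappiness
      ansible.posix.sysctl:
        name: vm.swappiness
        value: "1"
        state: present
        sysctl_file: /etc/sysctl.d/99-testbed.conf  # assumed path, not taken from the role
        reload: true

    # "rabbitmq" group from the log; changed on testbed-node-0/1/2, skipped elsewhere
    - name: Set RabbitMQ-related TCP tuning
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_file: /etc/sysctl.d/99-testbed.conf  # assumed path, not taken from the role
        reload: true
      loop:
        - { name: net.ipv4.tcp_keepalive_time, value: 6 }
        - { name: net.ipv4.tcp_keepalive_intvl, value: 3 }
        - { name: net.ipv4.tcp_keepalive_probes, value: 3 }
        - { name: net.core.wmem_max, value: 16777216 }
        - { name: net.core.rmem_max, value: 16777216 }
        - { name: net.ipv4.tcp_fin_timeout, value: 20 }
        - { name: net.ipv4.tcp_tw_reuse, value: 1 }
        - { name: net.core.somaxconn, value: 4096 }
        - { name: net.ipv4.tcp_syncookies, value: 0 }
        - { name: net.ipv4.tcp_max_syn_backlog, value: 8192 }
      when: inventory_hostname in groups['control']  # group name assumed for illustration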
2025-03-27 00:34:41.377596 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-27 00:34:41.378567 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-27 00:34:41.380071 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-27 00:34:41.380391 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-27 00:34:41.380418 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-27 00:34:41.381049 | orchestrator | 2025-03-27 00:34:41.381511 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-03-27 00:34:41.382305 | orchestrator | Thursday 27 March 2025 00:34:41 +0000 (0:00:01.667) 0:03:37.002 ******** 2025-03-27 00:34:41.435157 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-27 00:34:41.462270 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:34:41.547968 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-27 00:34:41.548608 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-27 00:34:41.902407 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:34:41.902509 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:34:41.906151 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-27 00:34:41.907123 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:34:41.907151 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-27 00:34:41.909053 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-27 00:34:41.910220 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-27 00:34:41.911433 | orchestrator | 2025-03-27 00:34:41.912175 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-03-27 00:34:41.912942 | orchestrator | Thursday 27 March 2025 00:34:41 +0000 (0:00:00.527) 0:03:37.530 ******** 2025-03-27 00:34:41.959100 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-27 00:34:41.985552 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:34:42.070692 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-27 00:34:42.502303 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-27 00:34:42.506692 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:34:42.507492 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:34:42.507526 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-27 00:34:42.507548 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:34:42.507862 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-27 00:34:42.509157 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024}) 2025-03-27 00:34:42.510363 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-27 00:34:42.511080 | orchestrator | 2025-03-27 00:34:42.512001 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-03-27 00:34:42.512642 | orchestrator | Thursday 27 March 2025 00:34:42 +0000 (0:00:00.600) 0:03:38.130 ******** 2025-03-27 00:34:42.586472 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:34:42.613515 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:34:42.640621 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:34:42.669564 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:34:42.830669 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:34:42.833338 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:34:42.835969 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:34:42.836269 | orchestrator | 2025-03-27 00:34:42.837334 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-03-27 00:34:42.838156 | orchestrator | Thursday 27 March 2025 00:34:42 +0000 (0:00:00.323) 0:03:38.454 ******** 2025-03-27 00:34:48.900336 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:34:48.900632 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:34:48.901028 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:34:48.901875 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:34:48.902862 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:34:48.903263 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:34:48.904240 | orchestrator | ok: [testbed-manager] 2025-03-27 00:34:48.905429 | orchestrator | 2025-03-27 00:34:48.906112 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-03-27 00:34:48.906969 | orchestrator | Thursday 27 March 2025 00:34:48 +0000 (0:00:06.073) 0:03:44.527 ******** 2025-03-27 00:34:48.983021 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-03-27 00:34:48.985859 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-03-27 00:34:49.015149 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:34:49.069236 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:34:49.106094 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-03-27 00:34:49.106124 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-03-27 00:34:49.106143 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:34:49.168241 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-03-27 00:34:49.170652 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:34:49.170994 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-03-27 00:34:49.254376 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:34:49.254625 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:34:49.255268 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-03-27 00:34:49.255742 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:34:49.257378 | orchestrator | 2025-03-27 00:34:49.258470 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-03-27 00:34:49.259919 | orchestrator | Thursday 27 March 2025 00:34:49 +0000 (0:00:00.354) 0:03:44.882 ******** 2025-03-27 00:34:50.472060 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-03-27 00:34:50.475297 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-03-27 00:34:50.475360 | orchestrator | 
ok: [testbed-node-4] => (item=cron) 2025-03-27 00:34:50.478647 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-03-27 00:34:50.478689 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-03-27 00:34:50.478715 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-03-27 00:34:50.479628 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-03-27 00:34:50.480511 | orchestrator | 2025-03-27 00:34:50.481459 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-03-27 00:34:50.482830 | orchestrator | Thursday 27 March 2025 00:34:50 +0000 (0:00:01.217) 0:03:46.099 ******** 2025-03-27 00:34:51.019555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:34:51.020577 | orchestrator | 2025-03-27 00:34:51.021401 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-03-27 00:34:51.022288 | orchestrator | Thursday 27 March 2025 00:34:51 +0000 (0:00:00.546) 0:03:46.646 ******** 2025-03-27 00:34:52.342653 | orchestrator | ok: [testbed-manager] 2025-03-27 00:34:52.343398 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:34:52.345096 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:34:52.345345 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:34:52.346494 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:34:52.347271 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:34:52.348261 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:34:52.349159 | orchestrator | 2025-03-27 00:34:52.349846 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-03-27 00:34:52.350376 | orchestrator | Thursday 27 March 2025 00:34:52 +0000 (0:00:01.324) 0:03:47.970 ******** 2025-03-27 00:34:52.983773 | orchestrator | ok: [testbed-manager] 2025-03-27 00:34:52.983949 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:34:52.984802 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:34:52.985652 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:34:52.986461 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:34:52.986853 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:34:52.987490 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:34:52.987875 | orchestrator | 2025-03-27 00:34:52.988375 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-03-27 00:34:52.988825 | orchestrator | Thursday 27 March 2025 00:34:52 +0000 (0:00:00.641) 0:03:48.611 ******** 2025-03-27 00:34:53.665866 | orchestrator | changed: [testbed-manager] 2025-03-27 00:34:53.666647 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:34:53.667464 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:34:53.668819 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:34:53.670318 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:34:53.671507 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:34:53.672386 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:34:53.672975 | orchestrator | 2025-03-27 00:34:53.673955 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-03-27 00:34:53.674685 | orchestrator | Thursday 27 March 2025 00:34:53 +0000 (0:00:00.678) 0:03:49.290 ******** 2025-03-27 00:34:54.271673 | orchestrator | ok: [testbed-manager] 2025-03-27 
00:34:54.271813 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:34:54.272777 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:34:54.275945 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:34:54.276382 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:34:54.276413 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:34:54.277905 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:34:54.279913 | orchestrator | 2025-03-27 00:34:54.280931 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-03-27 00:34:54.281809 | orchestrator | Thursday 27 March 2025 00:34:54 +0000 (0:00:00.609) 0:03:49.900 ******** 2025-03-27 00:34:55.306895 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743033865.2492936, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.308109 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743033921.1340127, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.309550 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743033926.5832539, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.310824 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743033865.208015, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.312058 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743033873.2984934, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.312347 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743033873.864953, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.313236 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743033878.5080216, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.313863 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743033889.725704, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.314605 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743033803.505076, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.314823 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743033861.872572, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.315645 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743033851.0049453, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.316513 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743033806.9493039, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.317941 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743033812.376953, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.319322 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743033814.2424617, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 00:34:55.320247 | orchestrator | 2025-03-27 00:34:55.320845 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-03-27 00:34:55.321543 | orchestrator | Thursday 27 March 2025 00:34:55 +0000 (0:00:01.031) 0:03:50.932 ******** 2025-03-27 00:34:56.457895 | orchestrator | changed: [testbed-manager] 2025-03-27 00:34:56.458262 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:34:56.459566 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:34:56.460679 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:34:56.462288 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:34:56.463418 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:34:56.463826 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:34:56.464314 | orchestrator | 2025-03-27 00:34:56.464939 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-03-27 00:34:56.465361 | orchestrator | Thursday 27 March 2025 00:34:56 +0000 (0:00:01.151) 0:03:52.083 ******** 2025-03-27 00:34:57.708612 | orchestrator | changed: [testbed-manager] 2025-03-27 00:34:57.709419 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:34:57.713120 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:34:57.715089 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:34:57.715118 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:34:57.715133 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:34:57.715152 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:34:57.715919 | orchestrator | 2025-03-27 00:34:57.715947 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the 
motd] ******************** 2025-03-27 00:34:57.715970 | orchestrator | Thursday 27 March 2025 00:34:57 +0000 (0:00:01.250) 0:03:53.334 ******** 2025-03-27 00:34:57.839317 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:34:57.882258 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:34:57.919429 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:34:57.953332 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:34:58.040232 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:34:58.041064 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:34:58.042296 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:34:58.043114 | orchestrator | 2025-03-27 00:34:58.045127 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-03-27 00:34:58.850947 | orchestrator | Thursday 27 March 2025 00:34:58 +0000 (0:00:00.334) 0:03:53.668 ******** 2025-03-27 00:34:58.851060 | orchestrator | ok: [testbed-manager] 2025-03-27 00:34:58.853774 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:34:58.854451 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:34:58.854479 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:34:58.854499 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:34:58.855531 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:34:58.857046 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:34:58.857390 | orchestrator | 2025-03-27 00:34:58.858236 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-03-27 00:34:58.858922 | orchestrator | Thursday 27 March 2025 00:34:58 +0000 (0:00:00.808) 0:03:54.477 ******** 2025-03-27 00:34:59.258674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:34:59.260151 | orchestrator | 2025-03-27 00:34:59.260558 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-03-27 00:34:59.261769 | orchestrator | Thursday 27 March 2025 00:34:59 +0000 (0:00:00.409) 0:03:54.887 ******** 2025-03-27 00:35:07.473591 | orchestrator | ok: [testbed-manager] 2025-03-27 00:35:07.474162 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:35:07.474844 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:35:07.476256 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:35:07.477337 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:35:07.477967 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:35:07.479228 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:35:07.479842 | orchestrator | 2025-03-27 00:35:07.480369 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-03-27 00:35:07.480904 | orchestrator | Thursday 27 March 2025 00:35:07 +0000 (0:00:08.213) 0:04:03.101 ******** 2025-03-27 00:35:08.928959 | orchestrator | ok: [testbed-manager] 2025-03-27 00:35:08.929406 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:35:08.929449 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:35:08.929794 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:35:08.930155 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:35:08.930701 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:35:08.931699 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:35:08.932534 | orchestrator | 2025-03-27 00:35:08.933350 | orchestrator | TASK 
[osism.services.rng : Manage rng service] ********************************* 2025-03-27 00:35:08.934097 | orchestrator | Thursday 27 March 2025 00:35:08 +0000 (0:00:01.454) 0:04:04.556 ******** 2025-03-27 00:35:10.051793 | orchestrator | ok: [testbed-manager] 2025-03-27 00:35:10.051973 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:35:10.056585 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:35:10.057700 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:35:10.058103 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:35:10.058130 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:35:10.058149 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:35:10.058713 | orchestrator | 2025-03-27 00:35:10.059393 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-03-27 00:35:10.059985 | orchestrator | Thursday 27 March 2025 00:35:10 +0000 (0:00:01.122) 0:04:05.678 ******** 2025-03-27 00:35:10.488425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:35:10.489653 | orchestrator | 2025-03-27 00:35:10.491305 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-03-27 00:35:19.389511 | orchestrator | Thursday 27 March 2025 00:35:10 +0000 (0:00:00.437) 0:04:06.116 ******** 2025-03-27 00:35:19.389657 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:35:19.391350 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:35:19.392851 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:35:19.392883 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:35:19.394912 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:35:19.395237 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:35:19.396979 | orchestrator | changed: [testbed-manager] 2025-03-27 00:35:19.398468 | orchestrator | 2025-03-27 00:35:19.398701 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-03-27 00:35:19.399359 | orchestrator | Thursday 27 March 2025 00:35:19 +0000 (0:00:08.899) 0:04:15.015 ******** 2025-03-27 00:35:20.071356 | orchestrator | changed: [testbed-manager] 2025-03-27 00:35:20.073815 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:35:20.077144 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:35:20.078245 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:35:20.079573 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:35:20.079827 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:35:20.081122 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:35:20.081894 | orchestrator | 2025-03-27 00:35:20.083262 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-03-27 00:35:20.086274 | orchestrator | Thursday 27 March 2025 00:35:20 +0000 (0:00:00.681) 0:04:15.696 ******** 2025-03-27 00:35:21.291441 | orchestrator | changed: [testbed-manager] 2025-03-27 00:35:21.293040 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:35:21.293069 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:35:21.293083 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:35:21.293096 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:35:21.293131 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:35:21.293153 | orchestrator | changed: [testbed-node-1] 
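The smartd block just above reduces to four steps: install the smartmontools package, create /var/log/smartd, copy a smartd configuration file, and then manage the service (which follows in the next task). A hedged sketch of an equivalent minimal setup with stock modules follows; the DEVICESCAN policy, file modes, and handler are assumptions for illustration, not the actual files shipped by the osism.services.smartd role:

- name: Minimal smartd setup (illustrative sketch, not the osism role)
  hosts: all
  become: true
  tasks:
    - name: Install smartmontools package
      ansible.builtin.apt:
        name: smartmontools
        state: present

    - name: Create /var/log/smartd directory
      ansible.builtin.file:
        path: /var/log/smartd
        state: directory
        mode: "0755"  # assumed mode

    - name: Copy smartd configuration file
      ansible.builtin.copy:
        dest: /etc/smartd.conf
        content: |
          # Monitor all devices; run a short self-test daily at 02:00 (assumed policy)
          DEVICESCAN -a -s S/../.././02
      notify: Restart smartd

    - name: Manage smartd service
      ansible.builtin.service:
        name: smartmontools  # Debian/Ubuntu unit name for the smartd daemon
        state: started
        enabled: true

  handlers:
    - name: Restart smartd
      ansible.builtin.service:
        name: smartmontools
        state: restarted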
2025-03-27 00:35:21.295051 | orchestrator | 2025-03-27 00:35:21.295372 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-03-27 00:35:21.295767 | orchestrator | Thursday 27 March 2025 00:35:21 +0000 (0:00:01.219) 0:04:16.916 ******** 2025-03-27 00:35:22.450753 | orchestrator | changed: [testbed-manager] 2025-03-27 00:35:22.450926 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:35:22.451234 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:35:22.451714 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:35:22.457179 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:35:22.457622 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:35:22.457717 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:35:22.457732 | orchestrator | 2025-03-27 00:35:22.457747 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-03-27 00:35:22.457773 | orchestrator | Thursday 27 March 2025 00:35:22 +0000 (0:00:01.161) 0:04:18.078 ******** 2025-03-27 00:35:22.557656 | orchestrator | ok: [testbed-manager] 2025-03-27 00:35:22.595076 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:35:22.642784 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:35:22.691689 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:35:22.767612 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:35:22.772436 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:35:22.774360 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:35:22.774385 | orchestrator | 2025-03-27 00:35:22.774405 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-03-27 00:35:22.848329 | orchestrator | Thursday 27 March 2025 00:35:22 +0000 (0:00:00.318) 0:04:18.396 ******** 2025-03-27 00:35:22.848382 | orchestrator | ok: [testbed-manager] 2025-03-27 00:35:22.916289 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:35:22.976352 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:35:23.012404 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:35:23.120336 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:35:23.121813 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:35:23.123291 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:35:23.126417 | orchestrator | 2025-03-27 00:35:23.256221 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-03-27 00:35:23.256278 | orchestrator | Thursday 27 March 2025 00:35:23 +0000 (0:00:00.352) 0:04:18.749 ******** 2025-03-27 00:35:23.256300 | orchestrator | ok: [testbed-manager] 2025-03-27 00:35:23.297060 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:35:23.350068 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:35:23.396688 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:35:23.485008 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:35:23.486084 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:35:23.487635 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:35:23.490950 | orchestrator | 2025-03-27 00:35:28.792939 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-03-27 00:35:28.793068 | orchestrator | Thursday 27 March 2025 00:35:23 +0000 (0:00:00.364) 0:04:19.113 ******** 2025-03-27 00:35:28.793103 | orchestrator | ok: [testbed-manager] 2025-03-27 00:35:28.793654 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:35:28.793685 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:35:28.793708 | orchestrator 
| ok: [testbed-node-2] 2025-03-27 00:35:28.796303 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:35:28.796341 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:35:28.796869 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:35:28.797275 | orchestrator | 2025-03-27 00:35:28.797830 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-03-27 00:35:28.799401 | orchestrator | Thursday 27 March 2025 00:35:28 +0000 (0:00:05.303) 0:04:24.417 ******** 2025-03-27 00:35:29.284055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:35:29.287090 | orchestrator | 2025-03-27 00:35:29.377404 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-03-27 00:35:29.377439 | orchestrator | Thursday 27 March 2025 00:35:29 +0000 (0:00:00.492) 0:04:24.910 ******** 2025-03-27 00:35:29.377461 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-03-27 00:35:29.428373 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-03-27 00:35:29.428402 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-03-27 00:35:29.428417 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-03-27 00:35:29.428437 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:35:29.429058 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-03-27 00:35:29.429745 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-03-27 00:35:29.473938 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:35:29.528318 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-03-27 00:35:29.528349 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-03-27 00:35:29.528370 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:35:29.529343 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-03-27 00:35:29.582582 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-03-27 00:35:29.583058 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:35:29.583092 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-03-27 00:35:29.684259 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:35:29.684753 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-03-27 00:35:29.685797 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:35:29.686474 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-03-27 00:35:29.686764 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-03-27 00:35:29.687359 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:35:29.687854 | orchestrator | 2025-03-27 00:35:29.688275 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-03-27 00:35:29.688701 | orchestrator | Thursday 27 March 2025 00:35:29 +0000 (0:00:00.402) 0:04:25.313 ******** 2025-03-27 00:35:30.305304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:35:30.353122 | orchestrator | 2025-03-27 00:35:30.397538 | 
orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-03-27 00:35:30.397578 | orchestrator | Thursday 27 March 2025 00:35:30 +0000 (0:00:00.618) 0:04:25.932 ******** 2025-03-27 00:35:30.397619 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-03-27 00:35:30.397769 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-03-27 00:35:30.436546 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:35:30.490100 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:35:30.491322 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-03-27 00:35:30.534098 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-03-27 00:35:30.534135 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:35:30.574253 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-03-27 00:35:30.574289 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:35:30.658457 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-03-27 00:35:30.659101 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:35:30.660598 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:35:30.661441 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-03-27 00:35:30.662648 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:35:30.663803 | orchestrator | 2025-03-27 00:35:30.664520 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-03-27 00:35:30.665404 | orchestrator | Thursday 27 March 2025 00:35:30 +0000 (0:00:00.355) 0:04:26.287 ******** 2025-03-27 00:35:31.286800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:35:31.287523 | orchestrator | 2025-03-27 00:35:31.288107 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-03-27 00:35:31.289091 | orchestrator | Thursday 27 March 2025 00:35:31 +0000 (0:00:00.623) 0:04:26.911 ******** 2025-03-27 00:36:05.791705 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:36:05.792981 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:36:05.793018 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:36:05.793033 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:36:05.793056 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:36:05.794667 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:36:05.795816 | orchestrator | changed: [testbed-manager] 2025-03-27 00:36:05.797222 | orchestrator | 2025-03-27 00:36:05.798257 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-03-27 00:36:05.799084 | orchestrator | Thursday 27 March 2025 00:36:05 +0000 (0:00:34.505) 0:05:01.416 ******** 2025-03-27 00:36:14.106749 | orchestrator | changed: [testbed-manager] 2025-03-27 00:36:14.107320 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:36:14.107707 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:36:14.108315 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:36:14.108834 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:36:14.109376 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:36:14.111356 | orchestrator | changed: [testbed-node-1] 
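
For context, the cleanup tasks recorded above and immediately below (removing cloud-init and unattended-upgrades, then autoremoving leftover dependencies) reduce to ordinary apt operations. The following is a minimal sketch of comparable tasks, assuming Debian-family hosts and an inventory group named `testbed`; it is illustrative only and not the actual osism.commons.cleanup implementation.

```yaml
# Illustrative sketch only -- not the osism.commons.cleanup source.
# Assumes Debian-family hosts and an inventory group named "testbed".
- hosts: testbed
  become: true
  tasks:
    - name: Remove cloud-init and unattended-upgrades
      ansible.builtin.apt:
        name:
          - cloud-init            # package list mirrors the removals seen in the log
          - unattended-upgrades
        state: absent
        purge: true               # also drop dpkg-owned configuration files

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true
        autoclean: true
```
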
2025-03-27 00:36:14.112031 | orchestrator | 2025-03-27 00:36:14.112907 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-03-27 00:36:14.114489 | orchestrator | Thursday 27 March 2025 00:36:14 +0000 (0:00:08.317) 0:05:09.733 ******** 2025-03-27 00:36:22.355462 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:36:22.356211 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:36:22.357100 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:36:22.358377 | orchestrator | changed: [testbed-manager] 2025-03-27 00:36:22.359983 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:36:22.360399 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:36:22.361303 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:36:22.361781 | orchestrator | 2025-03-27 00:36:22.362651 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-03-27 00:36:22.363157 | orchestrator | Thursday 27 March 2025 00:36:22 +0000 (0:00:08.249) 0:05:17.983 ******** 2025-03-27 00:36:24.178742 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:36:24.179073 | orchestrator | ok: [testbed-manager] 2025-03-27 00:36:24.181075 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:36:24.182483 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:36:24.182513 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:36:24.183134 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:36:24.184395 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:36:24.184898 | orchestrator | 2025-03-27 00:36:24.185727 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-03-27 00:36:24.186267 | orchestrator | Thursday 27 March 2025 00:36:24 +0000 (0:00:01.821) 0:05:19.804 ******** 2025-03-27 00:36:30.169720 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:36:30.178329 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:36:30.184161 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:36:30.184242 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:36:30.184279 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:36:30.721811 | orchestrator | changed: [testbed-manager] 2025-03-27 00:36:30.721930 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:36:30.721948 | orchestrator | 2025-03-27 00:36:30.721964 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-03-27 00:36:30.721980 | orchestrator | Thursday 27 March 2025 00:36:30 +0000 (0:00:05.992) 0:05:25.797 ******** 2025-03-27 00:36:30.722009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:36:30.722157 | orchestrator | 2025-03-27 00:36:30.722532 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-03-27 00:36:30.723341 | orchestrator | Thursday 27 March 2025 00:36:30 +0000 (0:00:00.554) 0:05:26.352 ******** 2025-03-27 00:36:31.519699 | orchestrator | changed: [testbed-manager] 2025-03-27 00:36:31.520483 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:36:31.521882 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:36:31.521944 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:36:31.522743 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:36:31.523514 | orchestrator | changed: 
[testbed-node-1] 2025-03-27 00:36:31.524065 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:36:31.525026 | orchestrator | 2025-03-27 00:36:31.526754 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-03-27 00:36:31.527411 | orchestrator | Thursday 27 March 2025 00:36:31 +0000 (0:00:00.794) 0:05:27.146 ******** 2025-03-27 00:36:33.304871 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:36:33.305789 | orchestrator | ok: [testbed-manager] 2025-03-27 00:36:33.305821 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:36:33.305844 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:36:33.306080 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:36:33.307000 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:36:33.308016 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:36:33.308403 | orchestrator | 2025-03-27 00:36:33.311571 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-03-27 00:36:33.312567 | orchestrator | Thursday 27 March 2025 00:36:33 +0000 (0:00:01.785) 0:05:28.932 ******** 2025-03-27 00:36:34.124528 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:36:34.124683 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:36:34.124747 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:36:34.126711 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:36:34.127149 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:36:34.127724 | orchestrator | changed: [testbed-manager] 2025-03-27 00:36:34.127755 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:36:34.128587 | orchestrator | 2025-03-27 00:36:34.130781 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-03-27 00:36:34.131464 | orchestrator | Thursday 27 March 2025 00:36:34 +0000 (0:00:00.820) 0:05:29.753 ******** 2025-03-27 00:36:34.230947 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:36:34.277523 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:36:34.318879 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:36:34.388149 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:36:34.471070 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:36:34.472682 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:36:34.474099 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:36:34.475039 | orchestrator | 2025-03-27 00:36:34.476309 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-03-27 00:36:34.477154 | orchestrator | Thursday 27 March 2025 00:36:34 +0000 (0:00:00.343) 0:05:30.096 ******** 2025-03-27 00:36:34.544968 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:36:34.586605 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:36:34.633871 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:36:34.668069 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:36:34.703884 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:36:34.925307 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:36:34.926243 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:36:34.930334 | orchestrator | 2025-03-27 00:36:35.055554 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-03-27 00:36:35.055599 | orchestrator | Thursday 27 March 2025 00:36:34 +0000 (0:00:00.457) 0:05:30.554 ******** 2025-03-27 00:36:35.055621 | orchestrator | ok: [testbed-manager] 2025-03-27 00:36:35.101838 | 
orchestrator | ok: [testbed-node-3] 2025-03-27 00:36:35.139559 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:36:35.182430 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:36:35.273534 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:36:35.273953 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:36:35.275017 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:36:35.276208 | orchestrator | 2025-03-27 00:36:35.276775 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-03-27 00:36:35.277820 | orchestrator | Thursday 27 March 2025 00:36:35 +0000 (0:00:00.348) 0:05:30.902 ******** 2025-03-27 00:36:35.407357 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:36:35.449222 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:36:35.490091 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:36:35.527485 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:36:35.611209 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:36:35.611410 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:36:35.612606 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:36:35.613654 | orchestrator | 2025-03-27 00:36:35.614417 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-03-27 00:36:35.615141 | orchestrator | Thursday 27 March 2025 00:36:35 +0000 (0:00:00.336) 0:05:31.239 ******** 2025-03-27 00:36:35.716478 | orchestrator | ok: [testbed-manager] 2025-03-27 00:36:35.760337 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:36:35.798481 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:36:35.842542 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:36:35.941602 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:36:35.943564 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:36:35.944088 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:36:35.944999 | orchestrator | 2025-03-27 00:36:35.945342 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-03-27 00:36:35.945764 | orchestrator | Thursday 27 March 2025 00:36:35 +0000 (0:00:00.330) 0:05:31.569 ******** 2025-03-27 00:36:36.053137 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:36:36.090582 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:36:36.130550 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:36:36.166795 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:36:36.241085 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:36:36.241548 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:36:36.242744 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:36:36.243121 | orchestrator | 2025-03-27 00:36:36.243154 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-03-27 00:36:36.322455 | orchestrator | Thursday 27 March 2025 00:36:36 +0000 (0:00:00.300) 0:05:31.869 ******** 2025-03-27 00:36:36.322549 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:36:36.408527 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:36:36.455326 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:36:36.620572 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:36:36.700455 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:36:36.701135 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:36:36.702067 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:36:36.703046 | orchestrator | 2025-03-27 00:36:36.703557 | orchestrator | TASK 
[osism.services.docker : Include docker install tasks] ******************** 2025-03-27 00:36:36.704375 | orchestrator | Thursday 27 March 2025 00:36:36 +0000 (0:00:00.459) 0:05:32.329 ******** 2025-03-27 00:36:37.164834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:36:37.165144 | orchestrator | 2025-03-27 00:36:37.165832 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-03-27 00:36:37.168096 | orchestrator | Thursday 27 March 2025 00:36:37 +0000 (0:00:00.462) 0:05:32.791 ******** 2025-03-27 00:36:38.253007 | orchestrator | ok: [testbed-manager] 2025-03-27 00:36:38.253408 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:36:38.254630 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:36:38.256500 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:36:38.257277 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:36:38.258417 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:36:38.259072 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:36:38.259555 | orchestrator | 2025-03-27 00:36:38.260356 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-03-27 00:36:38.261024 | orchestrator | Thursday 27 March 2025 00:36:38 +0000 (0:00:01.086) 0:05:33.877 ******** 2025-03-27 00:36:41.236791 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:36:41.237998 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:36:41.244011 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:36:41.245281 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:36:41.246355 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:36:41.247226 | orchestrator | ok: [testbed-manager] 2025-03-27 00:36:41.248378 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:36:41.249112 | orchestrator | 2025-03-27 00:36:41.252096 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-03-27 00:36:41.313471 | orchestrator | Thursday 27 March 2025 00:36:41 +0000 (0:00:02.986) 0:05:36.864 ******** 2025-03-27 00:36:41.313529 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-03-27 00:36:41.420656 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-03-27 00:36:41.421931 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-03-27 00:36:41.423095 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-03-27 00:36:41.424302 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-03-27 00:36:41.425397 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-03-27 00:36:41.495954 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:36:41.496571 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-03-27 00:36:41.497038 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-03-27 00:36:41.597448 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:36:41.598003 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-03-27 00:36:41.599596 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-03-27 00:36:41.600291 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-03-27 00:36:41.600801 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-03-27 
00:36:41.695887 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:36:41.697281 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-03-27 00:36:41.697806 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-03-27 00:36:41.697989 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-03-27 00:36:41.772490 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:36:41.773172 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-03-27 00:36:41.774117 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-03-27 00:36:41.774640 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-03-27 00:36:41.906689 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:36:41.907275 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:36:41.908108 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-03-27 00:36:41.908131 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-03-27 00:36:41.908151 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-03-27 00:36:41.909127 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:36:41.909408 | orchestrator | 2025-03-27 00:36:41.909436 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-03-27 00:36:41.910200 | orchestrator | Thursday 27 March 2025 00:36:41 +0000 (0:00:00.669) 0:05:37.533 ******** 2025-03-27 00:36:48.854012 | orchestrator | ok: [testbed-manager] 2025-03-27 00:36:48.855564 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:36:48.856376 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:36:48.858530 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:36:48.858978 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:36:48.861149 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:36:48.861381 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:36:48.862232 | orchestrator | 2025-03-27 00:36:48.863885 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-03-27 00:36:48.865195 | orchestrator | Thursday 27 March 2025 00:36:48 +0000 (0:00:06.946) 0:05:44.480 ******** 2025-03-27 00:36:50.126594 | orchestrator | ok: [testbed-manager] 2025-03-27 00:36:50.126761 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:36:50.127802 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:36:50.128814 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:36:50.129876 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:36:50.130138 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:36:50.130164 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:36:50.131504 | orchestrator | 2025-03-27 00:36:50.132267 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-03-27 00:36:50.132523 | orchestrator | Thursday 27 March 2025 00:36:50 +0000 (0:00:01.270) 0:05:45.751 ******** 2025-03-27 00:36:58.071895 | orchestrator | ok: [testbed-manager] 2025-03-27 00:36:58.074700 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:36:58.078332 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:36:58.078362 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:36:58.078383 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:36:58.079588 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:36:58.079617 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:36:58.080433 | orchestrator | 
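
The Docker installation steps just logged (repository GPG key, apt repository, package cache refresh, followed further below by version pinning and a containerd lock) are standard apt repository management. A minimal sketch of comparable tasks is shown here, assuming Ubuntu hosts, the upstream download.docker.com repository, and an inventory group named `testbed`; the key location and repository line are assumptions, and this is not the actual osism.services.docker role.

```yaml
# Illustrative sketch only -- not the osism.services.docker source.
# Key path, repo line, and the "testbed" group are assumptions.
- hosts: testbed
  become: true
  tasks:
    - name: Fetch the Docker repository signing key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/trusted.gpg.d/docker.asc
        mode: "0644"

    - name: Add the Docker apt repository
      ansible.builtin.apt_repository:
        repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present

    - name: Hold containerd.io at the installed version
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: hold           # roughly mirrors the "Lock containerd package" step below
```

Holding the runtime package prevents it from being upgraded as a side effect of later apt runs, which is the apparent intent of the unlock/lock pair visible in the log.
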
2025-03-27 00:36:58.080517 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-03-27 00:36:58.080779 | orchestrator | Thursday 27 March 2025 00:36:58 +0000 (0:00:07.943) 0:05:53.695 ******** 2025-03-27 00:37:01.290258 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:37:01.290450 | orchestrator | changed: [testbed-manager] 2025-03-27 00:37:01.291006 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:37:01.291584 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:37:01.295365 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:37:01.296090 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:37:01.296511 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:37:01.297335 | orchestrator | 2025-03-27 00:37:01.297914 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-03-27 00:37:01.298347 | orchestrator | Thursday 27 March 2025 00:37:01 +0000 (0:00:03.222) 0:05:56.918 ******** 2025-03-27 00:37:02.905654 | orchestrator | ok: [testbed-manager] 2025-03-27 00:37:02.906922 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:37:02.907350 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:37:02.907381 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:37:02.907809 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:37:02.909038 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:37:02.909103 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:37:02.909534 | orchestrator | 2025-03-27 00:37:02.910104 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-03-27 00:37:02.910450 | orchestrator | Thursday 27 March 2025 00:37:02 +0000 (0:00:01.613) 0:05:58.531 ******** 2025-03-27 00:37:04.369113 | orchestrator | ok: [testbed-manager] 2025-03-27 00:37:04.369600 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:37:04.369651 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:37:04.370699 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:37:04.370863 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:37:04.372700 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:37:04.373767 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:37:04.376019 | orchestrator | 2025-03-27 00:37:04.384653 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-03-27 00:37:04.385462 | orchestrator | Thursday 27 March 2025 00:37:04 +0000 (0:00:01.462) 0:05:59.994 ******** 2025-03-27 00:37:04.594565 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:37:04.711442 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:37:04.781160 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:37:04.869366 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:37:05.103101 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:37:05.104531 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:37:05.105488 | orchestrator | changed: [testbed-manager] 2025-03-27 00:37:05.107171 | orchestrator | 2025-03-27 00:37:05.108551 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-03-27 00:37:05.109830 | orchestrator | Thursday 27 March 2025 00:37:05 +0000 (0:00:00.738) 0:06:00.732 ******** 2025-03-27 00:37:15.037432 | orchestrator | ok: [testbed-manager] 2025-03-27 00:37:15.037614 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:37:15.037644 | orchestrator | changed: [testbed-node-4] 
2025-03-27 00:37:15.038911 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:37:15.039889 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:37:15.041002 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:37:15.041749 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:37:15.042212 | orchestrator | 2025-03-27 00:37:15.042743 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-03-27 00:37:15.043241 | orchestrator | Thursday 27 March 2025 00:37:15 +0000 (0:00:09.927) 0:06:10.659 ******** 2025-03-27 00:37:16.059550 | orchestrator | changed: [testbed-manager] 2025-03-27 00:37:16.060098 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:37:16.060135 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:37:16.061245 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:37:16.062139 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:37:16.062750 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:37:16.063307 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:37:16.063892 | orchestrator | 2025-03-27 00:37:16.064810 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-03-27 00:37:16.065242 | orchestrator | Thursday 27 March 2025 00:37:16 +0000 (0:00:01.026) 0:06:11.686 ******** 2025-03-27 00:37:28.731286 | orchestrator | ok: [testbed-manager] 2025-03-27 00:37:28.731938 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:37:28.731993 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:37:28.733155 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:37:28.734141 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:37:28.735081 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:37:28.735447 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:37:28.736051 | orchestrator | 2025-03-27 00:37:28.736744 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-03-27 00:37:28.737454 | orchestrator | Thursday 27 March 2025 00:37:28 +0000 (0:00:12.667) 0:06:24.353 ******** 2025-03-27 00:37:41.376618 | orchestrator | ok: [testbed-manager] 2025-03-27 00:37:41.379554 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:37:41.379656 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:37:41.379732 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:37:41.380098 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:37:41.380367 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:37:41.380811 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:37:41.381373 | orchestrator | 2025-03-27 00:37:41.384850 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-03-27 00:37:41.384953 | orchestrator | Thursday 27 March 2025 00:37:41 +0000 (0:00:12.645) 0:06:36.999 ******** 2025-03-27 00:37:41.805523 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-03-27 00:37:42.654677 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-03-27 00:37:42.655169 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-03-27 00:37:42.656517 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-03-27 00:37:42.660137 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-03-27 00:37:42.660716 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-03-27 00:37:42.661216 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-03-27 00:37:42.661829 | 
orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-03-27 00:37:42.662554 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-03-27 00:37:42.662902 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-03-27 00:37:42.663624 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-03-27 00:37:42.664566 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-03-27 00:37:42.665027 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-03-27 00:37:42.665503 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-03-27 00:37:42.665962 | orchestrator | 2025-03-27 00:37:42.666475 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-03-27 00:37:42.667099 | orchestrator | Thursday 27 March 2025 00:37:42 +0000 (0:00:01.281) 0:06:38.281 ******** 2025-03-27 00:37:42.812525 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:37:42.889606 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:37:42.963503 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:37:43.043484 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:37:43.111384 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:37:43.232071 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:37:43.233422 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:37:43.233467 | orchestrator | 2025-03-27 00:37:43.237126 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-03-27 00:37:47.539289 | orchestrator | Thursday 27 March 2025 00:37:43 +0000 (0:00:00.577) 0:06:38.858 ******** 2025-03-27 00:37:47.539449 | orchestrator | ok: [testbed-manager] 2025-03-27 00:37:47.541584 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:37:47.541741 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:37:47.542834 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:37:47.543997 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:37:47.544404 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:37:47.545074 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:37:47.546530 | orchestrator | 2025-03-27 00:37:47.547394 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-03-27 00:37:47.547769 | orchestrator | Thursday 27 March 2025 00:37:47 +0000 (0:00:04.305) 0:06:43.164 ******** 2025-03-27 00:37:47.673869 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:37:47.926477 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:37:47.994768 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:37:48.074700 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:37:48.150720 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:37:48.252328 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:37:48.253351 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:37:48.254143 | orchestrator | 2025-03-27 00:37:48.254855 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-03-27 00:37:48.258118 | orchestrator | Thursday 27 March 2025 00:37:48 +0000 (0:00:00.715) 0:06:43.879 ******** 2025-03-27 00:37:48.343917 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-03-27 00:37:48.344482 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-03-27 00:37:48.418760 | orchestrator | skipping: [testbed-manager] 2025-03-27 
00:37:48.419678 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-03-27 00:37:48.420412 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-03-27 00:37:48.513277 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:37:48.514280 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-03-27 00:37:48.514957 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-03-27 00:37:48.606201 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:37:48.606299 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-03-27 00:37:48.606517 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-03-27 00:37:48.694252 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:37:48.694668 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-03-27 00:37:48.695809 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-03-27 00:37:48.777944 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:37:48.778905 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-03-27 00:37:48.783195 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-03-27 00:37:48.916722 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:37:48.917745 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-03-27 00:37:48.918485 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-03-27 00:37:48.922542 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:37:49.058436 | orchestrator | 2025-03-27 00:37:49.058489 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-03-27 00:37:49.058507 | orchestrator | Thursday 27 March 2025 00:37:48 +0000 (0:00:00.664) 0:06:44.544 ******** 2025-03-27 00:37:49.058530 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:37:49.129342 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:37:49.201725 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:37:49.266213 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:37:49.332765 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:37:49.452385 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:37:49.453421 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:37:49.454406 | orchestrator | 2025-03-27 00:37:49.456213 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-03-27 00:37:49.457435 | orchestrator | Thursday 27 March 2025 00:37:49 +0000 (0:00:00.537) 0:06:45.081 ******** 2025-03-27 00:37:49.583375 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:37:49.653310 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:37:49.718839 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:37:49.782394 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:37:49.855802 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:37:49.951108 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:37:49.952063 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:37:49.953395 | orchestrator | 2025-03-27 00:37:49.954888 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-03-27 00:37:49.956084 | orchestrator | Thursday 27 March 2025 00:37:49 +0000 (0:00:00.496) 0:06:45.577 ******** 2025-03-27 00:37:50.096523 | orchestrator | skipping: [testbed-manager] 2025-03-27 
00:37:50.158630 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:37:50.235518 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:37:50.315122 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:37:50.380559 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:37:50.514014 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:37:50.514385 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:37:50.516498 | orchestrator | 2025-03-27 00:37:50.519088 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-03-27 00:37:56.986685 | orchestrator | Thursday 27 March 2025 00:37:50 +0000 (0:00:00.563) 0:06:46.141 ******** 2025-03-27 00:37:56.986833 | orchestrator | ok: [testbed-manager] 2025-03-27 00:37:56.988327 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:37:56.988371 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:37:56.990717 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:37:56.991874 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:37:56.993221 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:37:56.993579 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:37:56.994818 | orchestrator | 2025-03-27 00:37:56.996140 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-03-27 00:37:57.940666 | orchestrator | Thursday 27 March 2025 00:37:56 +0000 (0:00:06.471) 0:06:52.613 ******** 2025-03-27 00:37:57.940794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:37:57.941625 | orchestrator | 2025-03-27 00:37:57.942490 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-03-27 00:37:57.942552 | orchestrator | Thursday 27 March 2025 00:37:57 +0000 (0:00:00.954) 0:06:53.568 ******** 2025-03-27 00:37:58.470377 | orchestrator | ok: [testbed-manager] 2025-03-27 00:37:58.984339 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:37:58.984517 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:37:58.984838 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:37:58.985398 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:37:58.985684 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:37:58.986102 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:37:58.986910 | orchestrator | 2025-03-27 00:37:58.987959 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-03-27 00:37:58.988447 | orchestrator | Thursday 27 March 2025 00:37:58 +0000 (0:00:01.044) 0:06:54.613 ******** 2025-03-27 00:37:59.718510 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:00.183023 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:38:00.184192 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:38:00.186111 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:38:00.187091 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:38:00.187122 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:38:00.188068 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:38:00.189375 | orchestrator | 2025-03-27 00:38:00.189631 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-03-27 00:38:00.190111 | orchestrator | Thursday 27 March 2025 00:38:00 +0000 (0:00:01.196) 
0:06:55.810 ******** 2025-03-27 00:38:01.792783 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:01.793334 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:38:01.794843 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:38:01.794890 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:38:01.795742 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:38:01.796813 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:38:01.798044 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:38:01.798828 | orchestrator | 2025-03-27 00:38:01.799869 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-03-27 00:38:01.929374 | orchestrator | Thursday 27 March 2025 00:38:01 +0000 (0:00:01.606) 0:06:57.416 ******** 2025-03-27 00:38:01.929415 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:38:03.278705 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:03.279115 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:03.279435 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:03.283283 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:03.283521 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:03.284057 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:03.284774 | orchestrator | 2025-03-27 00:38:03.286472 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-03-27 00:38:03.287272 | orchestrator | Thursday 27 March 2025 00:38:03 +0000 (0:00:01.490) 0:06:58.906 ******** 2025-03-27 00:38:04.741872 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:04.744227 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:38:04.745560 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:38:04.746669 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:38:04.746974 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:38:04.748339 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:38:04.748851 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:38:04.749600 | orchestrator | 2025-03-27 00:38:04.751075 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-03-27 00:38:04.751450 | orchestrator | Thursday 27 March 2025 00:38:04 +0000 (0:00:01.442) 0:07:00.349 ******** 2025-03-27 00:38:06.360405 | orchestrator | changed: [testbed-manager] 2025-03-27 00:38:06.363525 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:38:06.363600 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:38:06.363618 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:38:06.363637 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:38:06.366218 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:38:06.367284 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:38:06.368105 | orchestrator | 2025-03-27 00:38:06.369039 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-03-27 00:38:06.369396 | orchestrator | Thursday 27 March 2025 00:38:06 +0000 (0:00:01.635) 0:07:01.984 ******** 2025-03-27 00:38:07.530669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:38:07.530898 | orchestrator | 2025-03-27 00:38:07.531281 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-03-27 
00:38:07.531786 | orchestrator | Thursday 27 March 2025 00:38:07 +0000 (0:00:01.171) 0:07:03.156 ******** 2025-03-27 00:38:08.981650 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:08.981803 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:08.981829 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:08.982901 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:08.985983 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:08.986086 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:08.986107 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:08.986121 | orchestrator | 2025-03-27 00:38:08.986136 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-03-27 00:38:08.986155 | orchestrator | Thursday 27 March 2025 00:38:08 +0000 (0:00:01.453) 0:07:04.609 ******** 2025-03-27 00:38:10.241816 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:10.242617 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:10.245555 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:10.247103 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:10.247168 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:10.248418 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:10.248901 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:10.249696 | orchestrator | 2025-03-27 00:38:10.250546 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-03-27 00:38:10.252574 | orchestrator | Thursday 27 March 2025 00:38:10 +0000 (0:00:01.257) 0:07:05.867 ******** 2025-03-27 00:38:11.734332 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:11.734682 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:11.736629 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:11.736846 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:11.737544 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:11.737970 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:11.738557 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:11.739377 | orchestrator | 2025-03-27 00:38:11.739904 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-03-27 00:38:11.740586 | orchestrator | Thursday 27 March 2025 00:38:11 +0000 (0:00:01.493) 0:07:07.360 ******** 2025-03-27 00:38:12.842404 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:12.842625 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:12.843360 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:12.844702 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:12.845611 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:12.846365 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:12.846850 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:12.847670 | orchestrator | 2025-03-27 00:38:12.848051 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-03-27 00:38:12.848606 | orchestrator | Thursday 27 March 2025 00:38:12 +0000 (0:00:01.105) 0:07:08.466 ******** 2025-03-27 00:38:13.982820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:38:13.983026 | orchestrator | 2025-03-27 00:38:13.983647 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-27 00:38:13.983945 | orchestrator 
| Thursday 27 March 2025 00:38:13 +0000 (0:00:00.819) 0:07:09.286 ******** 2025-03-27 00:38:13.984436 | orchestrator | 2025-03-27 00:38:13.984891 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-27 00:38:13.985237 | orchestrator | Thursday 27 March 2025 00:38:13 +0000 (0:00:00.041) 0:07:09.327 ******** 2025-03-27 00:38:13.985869 | orchestrator | 2025-03-27 00:38:13.986200 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-27 00:38:13.986966 | orchestrator | Thursday 27 March 2025 00:38:13 +0000 (0:00:00.043) 0:07:09.371 ******** 2025-03-27 00:38:13.987335 | orchestrator | 2025-03-27 00:38:13.987737 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-27 00:38:13.988167 | orchestrator | Thursday 27 March 2025 00:38:13 +0000 (0:00:00.042) 0:07:09.413 ******** 2025-03-27 00:38:13.989266 | orchestrator | 2025-03-27 00:38:13.989336 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-27 00:38:13.989834 | orchestrator | Thursday 27 March 2025 00:38:13 +0000 (0:00:00.038) 0:07:09.452 ******** 2025-03-27 00:38:13.990145 | orchestrator | 2025-03-27 00:38:13.990412 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-27 00:38:13.990795 | orchestrator | Thursday 27 March 2025 00:38:13 +0000 (0:00:00.047) 0:07:09.500 ******** 2025-03-27 00:38:13.992485 | orchestrator | 2025-03-27 00:38:13.993618 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-27 00:38:13.994619 | orchestrator | Thursday 27 March 2025 00:38:13 +0000 (0:00:00.067) 0:07:09.568 ******** 2025-03-27 00:38:13.995270 | orchestrator | 2025-03-27 00:38:13.996060 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-03-27 00:38:13.996744 | orchestrator | Thursday 27 March 2025 00:38:13 +0000 (0:00:00.043) 0:07:09.612 ******** 2025-03-27 00:38:15.169794 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:15.171075 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:15.171780 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:15.172462 | orchestrator | 2025-03-27 00:38:15.173433 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-03-27 00:38:15.174430 | orchestrator | Thursday 27 March 2025 00:38:15 +0000 (0:00:01.182) 0:07:10.794 ******** 2025-03-27 00:38:16.800498 | orchestrator | changed: [testbed-manager] 2025-03-27 00:38:16.800673 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:38:16.801554 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:38:16.805450 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:38:16.805812 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:38:16.806652 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:38:16.807150 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:38:16.808068 | orchestrator | 2025-03-27 00:38:16.809396 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-03-27 00:38:16.811057 | orchestrator | Thursday 27 March 2025 00:38:16 +0000 (0:00:01.631) 0:07:12.425 ******** 2025-03-27 00:38:17.984716 | orchestrator | changed: [testbed-manager] 2025-03-27 00:38:17.985464 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:38:17.989412 | orchestrator | changed: [testbed-node-4] 
2025-03-27 00:38:17.990364 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:38:17.990747 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:38:17.991815 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:38:17.992438 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:38:17.994967 | orchestrator | 2025-03-27 00:38:17.995420 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-03-27 00:38:17.995585 | orchestrator | Thursday 27 March 2025 00:38:17 +0000 (0:00:01.184) 0:07:13.610 ******** 2025-03-27 00:38:18.138538 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:38:20.228971 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:38:20.230300 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:38:20.230376 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:38:20.232892 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:38:20.233128 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:38:20.234106 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:38:20.234651 | orchestrator | 2025-03-27 00:38:20.235826 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-03-27 00:38:20.346807 | orchestrator | Thursday 27 March 2025 00:38:20 +0000 (0:00:02.243) 0:07:15.854 ******** 2025-03-27 00:38:20.346871 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:38:20.348073 | orchestrator | 2025-03-27 00:38:20.349040 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-03-27 00:38:20.349856 | orchestrator | Thursday 27 March 2025 00:38:20 +0000 (0:00:00.121) 0:07:15.976 ******** 2025-03-27 00:38:21.397515 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:21.397882 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:38:21.400849 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:38:21.402374 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:38:21.403300 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:38:21.403884 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:38:21.404564 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:38:21.405456 | orchestrator | 2025-03-27 00:38:21.406219 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-03-27 00:38:21.406848 | orchestrator | Thursday 27 March 2025 00:38:21 +0000 (0:00:01.047) 0:07:17.023 ******** 2025-03-27 00:38:21.544411 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:38:21.615398 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:38:21.695507 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:38:21.966090 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:38:22.033109 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:38:22.142321 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:38:22.142445 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:38:22.143555 | orchestrator | 2025-03-27 00:38:22.144386 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-03-27 00:38:22.145097 | orchestrator | Thursday 27 March 2025 00:38:22 +0000 (0:00:00.746) 0:07:17.770 ******** 2025-03-27 00:38:23.125648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 
00:38:23.126861 | orchestrator | 2025-03-27 00:38:23.128170 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-03-27 00:38:23.128750 | orchestrator | Thursday 27 March 2025 00:38:23 +0000 (0:00:00.982) 0:07:18.752 ******** 2025-03-27 00:38:24.012519 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:24.012679 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:24.012700 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:24.015109 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:24.015976 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:24.021945 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:26.766624 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:26.766753 | orchestrator | 2025-03-27 00:38:26.766774 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-03-27 00:38:26.766792 | orchestrator | Thursday 27 March 2025 00:38:24 +0000 (0:00:00.888) 0:07:19.640 ******** 2025-03-27 00:38:26.766824 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-03-27 00:38:26.767065 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-03-27 00:38:26.768476 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-03-27 00:38:26.769268 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-03-27 00:38:26.773020 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-03-27 00:38:26.773636 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-03-27 00:38:26.773666 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-03-27 00:38:26.774856 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-03-27 00:38:26.775756 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-03-27 00:38:26.776500 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-03-27 00:38:26.777104 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-03-27 00:38:26.777725 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-03-27 00:38:26.778451 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-03-27 00:38:26.778948 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-03-27 00:38:26.779672 | orchestrator | 2025-03-27 00:38:26.780023 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-03-27 00:38:26.782167 | orchestrator | Thursday 27 March 2025 00:38:26 +0000 (0:00:02.752) 0:07:22.393 ******** 2025-03-27 00:38:26.932986 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:38:27.008322 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:38:27.113062 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:38:27.178094 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:38:27.247503 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:38:27.402320 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:38:27.410495 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:38:27.411622 | orchestrator | 2025-03-27 00:38:27.411660 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-03-27 00:38:27.413024 | orchestrator | Thursday 27 March 2025 00:38:27 +0000 (0:00:00.633) 0:07:23.026 ******** 2025-03-27 00:38:28.357397 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:38:28.795739 | orchestrator | 2025-03-27 00:38:28.795858 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-03-27 00:38:28.795875 | orchestrator | Thursday 27 March 2025 00:38:28 +0000 (0:00:00.952) 0:07:23.979 ******** 2025-03-27 00:38:28.795936 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:29.553890 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:29.554509 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:29.558166 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:29.558235 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:29.559448 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:29.559471 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:29.559489 | orchestrator | 2025-03-27 00:38:29.560228 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-03-27 00:38:29.561005 | orchestrator | Thursday 27 March 2025 00:38:29 +0000 (0:00:01.201) 0:07:25.180 ******** 2025-03-27 00:38:29.993395 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:30.407235 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:30.407979 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:30.408301 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:30.409083 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:30.410097 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:30.410439 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:30.411368 | orchestrator | 2025-03-27 00:38:30.411715 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-03-27 00:38:30.412084 | orchestrator | Thursday 27 March 2025 00:38:30 +0000 (0:00:00.853) 0:07:26.034 ******** 2025-03-27 00:38:30.549785 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:38:30.625588 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:38:30.705682 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:38:30.789738 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:38:30.855793 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:38:30.978159 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:38:30.980348 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:38:30.981608 | orchestrator | 2025-03-27 00:38:30.982762 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-03-27 00:38:30.983412 | orchestrator | Thursday 27 March 2025 00:38:30 +0000 (0:00:00.571) 0:07:26.605 ******** 2025-03-27 00:38:32.581384 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:32.582471 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:32.583424 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:32.583821 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:32.584330 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:32.588158 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:32.592616 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:32.592654 | orchestrator | 2025-03-27 00:38:32.592677 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-03-27 00:38:32.593409 | orchestrator | Thursday 27 March 2025 00:38:32 +0000 (0:00:01.602) 0:07:28.207 ******** 2025-03-27 
00:38:32.717413 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:38:32.793880 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:38:32.866832 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:38:32.958644 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:38:33.043296 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:38:33.156620 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:38:33.157641 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:38:33.159287 | orchestrator | 2025-03-27 00:38:33.160241 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-03-27 00:38:33.160535 | orchestrator | Thursday 27 March 2025 00:38:33 +0000 (0:00:00.575) 0:07:28.783 ******** 2025-03-27 00:38:35.331899 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:35.332324 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:35.337071 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:35.337330 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:35.338624 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:35.339835 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:35.341266 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:35.342125 | orchestrator | 2025-03-27 00:38:35.342247 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-03-27 00:38:35.343254 | orchestrator | Thursday 27 March 2025 00:38:35 +0000 (0:00:02.170) 0:07:30.953 ******** 2025-03-27 00:38:36.782393 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:36.782571 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:38:36.784349 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:38:36.785914 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:38:36.786900 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:38:36.788263 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:38:36.788349 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:38:36.789442 | orchestrator | 2025-03-27 00:38:36.790118 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-03-27 00:38:36.790807 | orchestrator | Thursday 27 March 2025 00:38:36 +0000 (0:00:01.456) 0:07:32.410 ******** 2025-03-27 00:38:38.617452 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:38.617612 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:38:38.618715 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:38:38.619729 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:38:38.622133 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:38:38.622599 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:38:38.622628 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:38:38.623416 | orchestrator | 2025-03-27 00:38:38.623963 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-03-27 00:38:38.624564 | orchestrator | Thursday 27 March 2025 00:38:38 +0000 (0:00:01.832) 0:07:34.243 ******** 2025-03-27 00:38:40.418479 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:40.419374 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:38:40.420900 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:38:40.421534 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:38:40.422704 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:38:40.423671 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:38:40.424357 | orchestrator | changed: [testbed-node-2] 
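Annotation: the tasks above install the docker-compose-plugin package and hook container stacks into systemd via an osism.target and a docker-compose unit file. The commands below are an illustrative, hand-run way to check the result on a node; they are not part of the job output and assume only standard Docker and systemd tooling is present.

    docker compose version                       # confirm the compose v2 plugin is usable
    systemctl status osism.target                # the target copied and enabled by the role
    systemctl list-dependencies osism.target     # units attached to the target, if any
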
2025-03-27 00:38:40.424820 | orchestrator | 2025-03-27 00:38:40.425559 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-27 00:38:40.426415 | orchestrator | Thursday 27 March 2025 00:38:40 +0000 (0:00:01.800) 0:07:36.043 ******** 2025-03-27 00:38:41.089800 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:41.161539 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:41.630917 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:41.632274 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:41.632314 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:41.632335 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:41.633397 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:41.634225 | orchestrator | 2025-03-27 00:38:41.635475 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-27 00:38:41.636141 | orchestrator | Thursday 27 March 2025 00:38:41 +0000 (0:00:01.210) 0:07:37.254 ******** 2025-03-27 00:38:41.772851 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:38:41.848993 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:38:41.920488 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:38:41.988385 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:38:42.063571 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:38:42.509901 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:38:42.510518 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:38:42.512148 | orchestrator | 2025-03-27 00:38:42.513247 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-03-27 00:38:42.515307 | orchestrator | Thursday 27 March 2025 00:38:42 +0000 (0:00:00.882) 0:07:38.136 ******** 2025-03-27 00:38:42.668132 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:38:42.737291 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:38:42.815868 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:38:42.879996 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:38:42.960479 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:38:43.079094 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:38:43.079241 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:38:43.079602 | orchestrator | 2025-03-27 00:38:43.080134 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-03-27 00:38:43.081069 | orchestrator | Thursday 27 March 2025 00:38:43 +0000 (0:00:00.572) 0:07:38.709 ******** 2025-03-27 00:38:43.219579 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:43.297739 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:43.372284 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:43.442056 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:43.520238 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:43.630989 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:43.632051 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:43.633261 | orchestrator | 2025-03-27 00:38:43.634327 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-03-27 00:38:43.636400 | orchestrator | Thursday 27 March 2025 00:38:43 +0000 (0:00:00.544) 0:07:39.253 ******** 2025-03-27 00:38:44.028044 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:44.114917 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:44.193038 | orchestrator | ok: [testbed-node-4] 2025-03-27 
00:38:44.272553 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:44.345808 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:44.461640 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:44.463074 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:44.463389 | orchestrator | 2025-03-27 00:38:44.467401 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-03-27 00:38:44.610719 | orchestrator | Thursday 27 March 2025 00:38:44 +0000 (0:00:00.835) 0:07:40.089 ******** 2025-03-27 00:38:44.610767 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:44.694965 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:44.773690 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:44.844153 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:44.914635 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:45.042384 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:45.042509 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:45.043459 | orchestrator | 2025-03-27 00:38:45.043823 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-03-27 00:38:45.044598 | orchestrator | Thursday 27 March 2025 00:38:45 +0000 (0:00:00.583) 0:07:40.672 ******** 2025-03-27 00:38:50.617474 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:50.618405 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:50.620570 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:50.622004 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:50.628574 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:50.630105 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:50.631788 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:50.632003 | orchestrator | 2025-03-27 00:38:50.632980 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-03-27 00:38:50.633326 | orchestrator | Thursday 27 March 2025 00:38:50 +0000 (0:00:05.570) 0:07:46.242 ******** 2025-03-27 00:38:50.760411 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:38:50.907688 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:38:50.983211 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:38:51.047891 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:38:51.163347 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:38:51.164208 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:38:51.165389 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:38:51.167164 | orchestrator | 2025-03-27 00:38:51.167484 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-03-27 00:38:51.169144 | orchestrator | Thursday 27 March 2025 00:38:51 +0000 (0:00:00.547) 0:07:46.790 ******** 2025-03-27 00:38:52.275428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:38:52.275601 | orchestrator | 2025-03-27 00:38:52.276167 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-03-27 00:38:52.276362 | orchestrator | Thursday 27 March 2025 00:38:52 +0000 (0:00:01.111) 0:07:47.902 ******** 2025-03-27 00:38:54.336121 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:54.336817 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:54.339608 | orchestrator | ok: 
[testbed-node-5] 2025-03-27 00:38:54.341819 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:54.344332 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:54.345089 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:54.345647 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:54.346483 | orchestrator | 2025-03-27 00:38:54.347168 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-03-27 00:38:54.347301 | orchestrator | Thursday 27 March 2025 00:38:54 +0000 (0:00:02.056) 0:07:49.958 ******** 2025-03-27 00:38:55.691629 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:55.692398 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:55.692537 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:55.693283 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:55.693798 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:55.694268 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:55.694581 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:55.694995 | orchestrator | 2025-03-27 00:38:55.695634 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-03-27 00:38:55.695917 | orchestrator | Thursday 27 March 2025 00:38:55 +0000 (0:00:01.360) 0:07:51.319 ******** 2025-03-27 00:38:56.174588 | orchestrator | ok: [testbed-manager] 2025-03-27 00:38:56.632573 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:38:56.633537 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:38:56.634261 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:38:56.634335 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:38:56.634908 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:38:56.635202 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:38:56.638990 | orchestrator | 2025-03-27 00:38:58.749621 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-03-27 00:38:58.749734 | orchestrator | Thursday 27 March 2025 00:38:56 +0000 (0:00:00.939) 0:07:52.258 ******** 2025-03-27 00:38:58.749768 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-27 00:38:58.750389 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-27 00:38:58.750416 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-27 00:38:58.750439 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-27 00:38:58.750639 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-27 00:38:58.752101 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-27 00:38:58.753659 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-27 00:38:58.753945 | orchestrator | 2025-03-27 00:38:58.754372 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-03-27 00:38:58.754803 | orchestrator | 
Thursday 27 March 2025 00:38:58 +0000 (0:00:02.116) 0:07:54.374 ******** 2025-03-27 00:38:59.656335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:38:59.656841 | orchestrator | 2025-03-27 00:38:59.657586 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-03-27 00:38:59.658264 | orchestrator | Thursday 27 March 2025 00:38:59 +0000 (0:00:00.908) 0:07:55.283 ******** 2025-03-27 00:39:09.373638 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:39:09.375366 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:39:09.375931 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:39:09.378763 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:39:09.380297 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:39:09.380628 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:39:09.381303 | orchestrator | changed: [testbed-manager] 2025-03-27 00:39:09.381960 | orchestrator | 2025-03-27 00:39:09.382390 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-03-27 00:39:09.382837 | orchestrator | Thursday 27 March 2025 00:39:09 +0000 (0:00:09.715) 0:08:04.998 ******** 2025-03-27 00:39:11.214767 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:11.215422 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:11.216049 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:11.219431 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:39:11.219516 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:11.219536 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:11.219551 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:11.219565 | orchestrator | 2025-03-27 00:39:11.219581 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-03-27 00:39:11.219601 | orchestrator | Thursday 27 March 2025 00:39:11 +0000 (0:00:01.842) 0:08:06.841 ******** 2025-03-27 00:39:12.582545 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:12.583172 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:39:12.584909 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:12.586836 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:12.587601 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:12.589589 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:12.591096 | orchestrator | 2025-03-27 00:39:12.591969 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-03-27 00:39:12.592381 | orchestrator | Thursday 27 March 2025 00:39:12 +0000 (0:00:01.365) 0:08:08.206 ******** 2025-03-27 00:39:14.133737 | orchestrator | changed: [testbed-manager] 2025-03-27 00:39:14.137294 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:39:14.139080 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:39:14.139094 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:39:14.139103 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:39:14.139111 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:39:14.139118 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:39:14.139128 | orchestrator | 2025-03-27 00:39:14.139895 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-03-27 00:39:14.139909 | orchestrator | 2025-03-27 
00:39:14.139920 | orchestrator | TASK [Include hardening role] ************************************************** 2025-03-27 00:39:14.140237 | orchestrator | Thursday 27 March 2025 00:39:14 +0000 (0:00:01.555) 0:08:09.762 ******** 2025-03-27 00:39:14.291637 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:39:14.356153 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:39:14.445838 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:39:14.509333 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:39:14.577707 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:39:14.715242 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:39:14.715947 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:39:14.718260 | orchestrator | 2025-03-27 00:39:14.719887 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-03-27 00:39:14.719911 | orchestrator | 2025-03-27 00:39:14.720733 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-03-27 00:39:14.722304 | orchestrator | Thursday 27 March 2025 00:39:14 +0000 (0:00:00.581) 0:08:10.343 ******** 2025-03-27 00:39:16.152911 | orchestrator | changed: [testbed-manager] 2025-03-27 00:39:16.153695 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:39:16.154783 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:39:16.155407 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:39:16.156019 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:39:16.156748 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:39:16.157679 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:39:16.158095 | orchestrator | 2025-03-27 00:39:16.159011 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-03-27 00:39:16.159458 | orchestrator | Thursday 27 March 2025 00:39:16 +0000 (0:00:01.435) 0:08:11.779 ******** 2025-03-27 00:39:17.696259 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:17.698936 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:17.701423 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:39:17.701955 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:17.702869 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:17.703603 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:17.704069 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:17.704783 | orchestrator | 2025-03-27 00:39:17.705293 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-03-27 00:39:17.706209 | orchestrator | Thursday 27 March 2025 00:39:17 +0000 (0:00:01.540) 0:08:13.319 ******** 2025-03-27 00:39:17.834833 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:39:18.150440 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:39:18.240446 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:39:18.318623 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:39:18.391149 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:39:18.859801 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:39:18.861243 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:39:18.862242 | orchestrator | 2025-03-27 00:39:18.863572 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-03-27 00:39:18.864397 | orchestrator | Thursday 27 March 2025 00:39:18 +0000 (0:00:01.167) 0:08:14.486 ******** 2025-03-27 00:39:20.181061 | orchestrator | changed: 
[testbed-manager] 2025-03-27 00:39:20.181265 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:39:20.181799 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:39:20.183030 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:39:20.183731 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:39:20.184623 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:39:20.185472 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:39:20.188878 | orchestrator | 2025-03-27 00:39:21.266525 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-03-27 00:39:21.266609 | orchestrator | 2025-03-27 00:39:21.266616 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-03-27 00:39:21.266622 | orchestrator | Thursday 27 March 2025 00:39:20 +0000 (0:00:01.323) 0:08:15.809 ******** 2025-03-27 00:39:21.266637 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:39:21.267233 | orchestrator | 2025-03-27 00:39:21.267883 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-03-27 00:39:21.270261 | orchestrator | Thursday 27 March 2025 00:39:21 +0000 (0:00:01.083) 0:08:16.893 ******** 2025-03-27 00:39:21.802500 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:22.250459 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:22.250809 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:39:22.251568 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:22.252662 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:22.253447 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:22.255075 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:22.255801 | orchestrator | 2025-03-27 00:39:22.256517 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-03-27 00:39:22.257673 | orchestrator | Thursday 27 March 2025 00:39:22 +0000 (0:00:00.983) 0:08:17.876 ******** 2025-03-27 00:39:23.455604 | orchestrator | changed: [testbed-manager] 2025-03-27 00:39:23.456120 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:39:23.458002 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:39:23.459276 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:39:23.460085 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:39:23.461211 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:39:23.461605 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:39:23.462751 | orchestrator | 2025-03-27 00:39:23.463277 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-03-27 00:39:23.464322 | orchestrator | Thursday 27 March 2025 00:39:23 +0000 (0:00:01.204) 0:08:19.081 ******** 2025-03-27 00:39:24.565163 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:39:24.566079 | orchestrator | 2025-03-27 00:39:24.566316 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-03-27 00:39:24.566656 | orchestrator | Thursday 27 March 2025 00:39:24 +0000 (0:00:01.110) 0:08:20.192 ******** 2025-03-27 00:39:25.038409 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:25.493594 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:25.493751 | orchestrator | ok: 
[testbed-node-4] 2025-03-27 00:39:25.494805 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:25.496625 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:25.497683 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:25.501134 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:26.695424 | orchestrator | 2025-03-27 00:39:26.695525 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-03-27 00:39:26.695542 | orchestrator | Thursday 27 March 2025 00:39:25 +0000 (0:00:00.931) 0:08:21.123 ******** 2025-03-27 00:39:26.695571 | orchestrator | changed: [testbed-manager] 2025-03-27 00:39:26.696109 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:39:26.697420 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:39:26.698846 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:39:26.700259 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:39:26.701057 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:39:26.702498 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:39:26.703298 | orchestrator | 2025-03-27 00:39:26.707723 | orchestrator | 2025-03-27 00:39:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:39:26.708246 | orchestrator | 2025-03-27 00:39:26 | INFO  | Please wait and do not abort execution. 2025-03-27 00:39:26.708275 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:39:26.722104 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-03-27 00:39:26.725013 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-27 00:39:26.726061 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-27 00:39:26.726982 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-27 00:39:26.727616 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-03-27 00:39:26.728128 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-27 00:39:26.728608 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-27 00:39:26.729051 | orchestrator | 2025-03-27 00:39:26.729439 | orchestrator | Thursday 27 March 2025 00:39:26 +0000 (0:00:01.199) 0:08:22.323 ******** 2025-03-27 00:39:26.729924 | orchestrator | =============================================================================== 2025-03-27 00:39:26.730395 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.82s 2025-03-27 00:39:26.730837 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.53s 2025-03-27 00:39:26.731349 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.51s 2025-03-27 00:39:26.731803 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.32s 2025-03-27 00:39:26.732208 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.26s 2025-03-27 00:39:26.732697 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.71s 2025-03-27 00:39:26.733050 | orchestrator | osism.services.docker : 
Install docker-cli package --------------------- 12.67s 2025-03-27 00:39:26.733337 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.65s 2025-03-27 00:39:26.733801 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.93s 2025-03-27 00:39:26.734216 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.72s 2025-03-27 00:39:26.734549 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.90s 2025-03-27 00:39:26.734957 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.32s 2025-03-27 00:39:26.735359 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.25s 2025-03-27 00:39:26.735730 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.21s 2025-03-27 00:39:26.736228 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.94s 2025-03-27 00:39:26.736832 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.95s 2025-03-27 00:39:26.737018 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.47s 2025-03-27 00:39:26.737269 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.07s 2025-03-27 00:39:26.738107 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.99s 2025-03-27 00:39:26.738520 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.57s 2025-03-27 00:39:27.539554 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-03-27 00:39:29.935443 | orchestrator | + osism apply network 2025-03-27 00:39:29.936297 | orchestrator | 2025-03-27 00:39:29 | INFO  | Task 99175512-8a8e-4f8e-872e-b493751faf2d (network) was prepared for execution. 2025-03-27 00:39:33.580475 | orchestrator | 2025-03-27 00:39:29 | INFO  | It takes a moment until task 99175512-8a8e-4f8e-872e-b493751faf2d (network) has been started and output is visible here. 
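Annotation: the bootstrap play has completed and `osism apply network` is started next. As the following output shows, the osism.commons.network role writes /etc/netplan/01-osism.yaml and removes the cloud-init generated /etc/netplan/50-cloud-init.yaml. The commands below are an illustrative way to inspect the resulting netplan state on a node afterwards; they are not part of the job itself.

    netplan get            # print the merged netplan configuration
    ls /etc/netplan/       # on the testbed nodes, expected to contain only 01-osism.yaml
    networkctl list        # interfaces as managed by systemd-networkd
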
2025-03-27 00:39:33.580618 | orchestrator | 2025-03-27 00:39:33.581950 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-03-27 00:39:33.583667 | orchestrator | 2025-03-27 00:39:33.584966 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-03-27 00:39:33.587341 | orchestrator | Thursday 27 March 2025 00:39:33 +0000 (0:00:00.213) 0:00:00.213 ******** 2025-03-27 00:39:33.730796 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:33.820725 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:33.900036 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:33.976237 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:34.056505 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:34.336217 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:39:34.337383 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:34.337812 | orchestrator | 2025-03-27 00:39:34.339930 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-03-27 00:39:34.340806 | orchestrator | Thursday 27 March 2025 00:39:34 +0000 (0:00:00.754) 0:00:00.967 ******** 2025-03-27 00:39:35.628024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 00:39:35.629145 | orchestrator | 2025-03-27 00:39:35.632141 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-03-27 00:39:35.633265 | orchestrator | Thursday 27 March 2025 00:39:35 +0000 (0:00:01.291) 0:00:02.258 ******** 2025-03-27 00:39:37.737476 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:37.738000 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:37.740392 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:37.742941 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:37.744625 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:37.745743 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:39:37.746998 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:37.747954 | orchestrator | 2025-03-27 00:39:37.748764 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-03-27 00:39:37.749210 | orchestrator | Thursday 27 March 2025 00:39:37 +0000 (0:00:02.108) 0:00:04.367 ******** 2025-03-27 00:39:39.568018 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:39.573159 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:39.573257 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:39.573276 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:39.573294 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:39.574562 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:39:39.575397 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:39.575424 | orchestrator | 2025-03-27 00:39:39.575786 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-03-27 00:39:39.576421 | orchestrator | Thursday 27 March 2025 00:39:39 +0000 (0:00:01.828) 0:00:06.195 ******** 2025-03-27 00:39:40.163627 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-03-27 00:39:40.164381 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-03-27 00:39:40.164926 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-03-27 00:39:40.854584 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-03-27 00:39:40.856089 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-03-27 00:39:40.856132 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-03-27 00:39:40.856156 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-03-27 00:39:40.857459 | orchestrator | 2025-03-27 00:39:40.859723 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-03-27 00:39:42.730496 | orchestrator | Thursday 27 March 2025 00:39:40 +0000 (0:00:01.285) 0:00:07.481 ******** 2025-03-27 00:39:42.730621 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 00:39:42.732037 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-27 00:39:42.732752 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-27 00:39:42.732777 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-27 00:39:42.733314 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-27 00:39:42.734837 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-27 00:39:42.735575 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-27 00:39:42.736345 | orchestrator | 2025-03-27 00:39:42.737258 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-03-27 00:39:42.738446 | orchestrator | Thursday 27 March 2025 00:39:42 +0000 (0:00:01.880) 0:00:09.361 ******** 2025-03-27 00:39:44.501481 | orchestrator | changed: [testbed-manager] 2025-03-27 00:39:44.504859 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:39:44.506306 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:39:44.509601 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:39:44.510913 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:39:44.511472 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:39:44.517199 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:39:44.522259 | orchestrator | 2025-03-27 00:39:44.522423 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-03-27 00:39:44.522451 | orchestrator | Thursday 27 March 2025 00:39:44 +0000 (0:00:01.764) 0:00:11.125 ******** 2025-03-27 00:39:45.069107 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-27 00:39:45.188080 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 00:39:45.693424 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-27 00:39:45.696263 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-27 00:39:45.698961 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-27 00:39:45.700035 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-27 00:39:45.700800 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-27 00:39:45.701318 | orchestrator | 2025-03-27 00:39:45.702001 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-03-27 00:39:45.702369 | orchestrator | Thursday 27 March 2025 00:39:45 +0000 (0:00:01.198) 0:00:12.324 ******** 2025-03-27 00:39:46.168531 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:46.261012 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:46.898791 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:46.899321 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:46.902942 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:46.903012 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:39:46.903029 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:46.903047 | orchestrator | 2025-03-27 
00:39:46.904984 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-03-27 00:39:46.906451 | orchestrator | Thursday 27 March 2025 00:39:46 +0000 (0:00:01.203) 0:00:13.528 ******** 2025-03-27 00:39:47.108908 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:39:47.204236 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:39:47.292659 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:39:47.398910 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:39:47.487916 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:39:47.826446 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:39:47.826590 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:39:47.828547 | orchestrator | 2025-03-27 00:39:47.830152 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-03-27 00:39:47.830549 | orchestrator | Thursday 27 March 2025 00:39:47 +0000 (0:00:00.928) 0:00:14.457 ******** 2025-03-27 00:39:49.940520 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:49.943959 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:49.946982 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:49.948827 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:49.949894 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:49.951680 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:49.952069 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:39:49.952993 | orchestrator | 2025-03-27 00:39:49.954484 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-03-27 00:39:49.955423 | orchestrator | Thursday 27 March 2025 00:39:49 +0000 (0:00:02.116) 0:00:16.573 ******** 2025-03-27 00:39:52.051130 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-03-27 00:39:52.051335 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-27 00:39:52.054814 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-27 00:39:52.057355 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-27 00:39:52.057520 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-27 00:39:52.058649 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-27 00:39:52.060006 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-27 00:39:52.060611 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-27 00:39:52.060641 | orchestrator | 2025-03-27 00:39:52.061234 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-03-27 00:39:52.062080 | orchestrator | Thursday 27 March 2025 00:39:52 +0000 (0:00:02.107) 0:00:18.681 ******** 2025-03-27 00:39:53.667700 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:53.669523 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:39:53.671835 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:39:53.673038 | 
orchestrator | changed: [testbed-node-4] 2025-03-27 00:39:53.674223 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:39:53.675641 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:39:53.676360 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:39:53.677422 | orchestrator | 2025-03-27 00:39:53.678115 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-03-27 00:39:53.678853 | orchestrator | Thursday 27 March 2025 00:39:53 +0000 (0:00:01.616) 0:00:20.297 ******** 2025-03-27 00:39:55.267112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 00:39:55.268137 | orchestrator | 2025-03-27 00:39:55.270518 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-03-27 00:39:55.271951 | orchestrator | Thursday 27 March 2025 00:39:55 +0000 (0:00:01.599) 0:00:21.897 ******** 2025-03-27 00:39:57.229390 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:57.231380 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:57.232245 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:57.232296 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:57.233429 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:57.234150 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:57.235030 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:39:57.236102 | orchestrator | 2025-03-27 00:39:57.238223 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-03-27 00:39:57.238962 | orchestrator | Thursday 27 March 2025 00:39:57 +0000 (0:00:01.964) 0:00:23.861 ******** 2025-03-27 00:39:57.394451 | orchestrator | ok: [testbed-manager] 2025-03-27 00:39:57.480719 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:39:57.749284 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:39:57.841583 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:39:57.946805 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:39:58.112482 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:39:58.112754 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:39:58.116651 | orchestrator | 2025-03-27 00:39:58.116902 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-03-27 00:39:58.117331 | orchestrator | Thursday 27 March 2025 00:39:58 +0000 (0:00:00.879) 0:00:24.740 ******** 2025-03-27 00:39:58.528929 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-27 00:39:58.529732 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-03-27 00:39:58.646993 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-27 00:39:58.647198 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-03-27 00:39:58.750416 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-27 00:39:59.259942 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-03-27 00:39:59.260064 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-27 00:39:59.261380 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-03-27 00:39:59.263027 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-03-27 00:39:59.263512 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-03-27 00:39:59.266124 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-27 00:39:59.267280 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-03-27 00:39:59.268290 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-27 00:39:59.270314 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-03-27 00:39:59.272561 | orchestrator | 2025-03-27 00:39:59.275295 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-03-27 00:39:59.275366 | orchestrator | Thursday 27 March 2025 00:39:59 +0000 (0:00:01.153) 0:00:25.894 ******** 2025-03-27 00:39:59.645699 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:39:59.731757 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:39:59.828868 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:39:59.915254 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:40:00.004165 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:40:01.275336 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:40:01.276231 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:40:01.279219 | orchestrator | 2025-03-27 00:40:01.280922 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-03-27 00:40:01.280961 | orchestrator | Thursday 27 March 2025 00:40:01 +0000 (0:00:02.012) 0:00:27.906 ******** 2025-03-27 00:40:01.463886 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:40:01.568231 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:40:01.895618 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:40:01.984148 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:40:02.067464 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:40:02.113786 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:40:02.114542 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:40:02.116444 | orchestrator | 2025-03-27 00:40:02.116527 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:40:02.116993 | orchestrator | 2025-03-27 00:40:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:40:02.117095 | orchestrator | 2025-03-27 00:40:02 | INFO  | Please wait and do not abort execution. 
2025-03-27 00:40:02.118091 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:40:02.119059 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:40:02.119576 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:40:02.120753 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:40:02.120891 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:40:02.121866 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:40:02.122268 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:40:02.122308 | orchestrator | 2025-03-27 00:40:02.122574 | orchestrator | Thursday 27 March 2025 00:40:02 +0000 (0:00:00.841) 0:00:28.747 ******** 2025-03-27 00:40:02.123496 | orchestrator | =============================================================================== 2025-03-27 00:40:02.124511 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.12s 2025-03-27 00:40:02.124981 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.11s 2025-03-27 00:40:02.126260 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 2.11s 2025-03-27 00:40:02.126906 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 2.01s 2025-03-27 00:40:02.127590 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.96s 2025-03-27 00:40:02.128224 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.88s 2025-03-27 00:40:02.128791 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.83s 2025-03-27 00:40:02.129493 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.76s 2025-03-27 00:40:02.130089 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.62s 2025-03-27 00:40:02.130494 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.60s 2025-03-27 00:40:02.132302 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.29s 2025-03-27 00:40:02.132865 | orchestrator | osism.commons.network : Create required directories --------------------- 1.29s 2025-03-27 00:40:02.133419 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.20s 2025-03-27 00:40:02.134012 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.20s 2025-03-27 00:40:02.134379 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.15s 2025-03-27 00:40:02.134762 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.93s 2025-03-27 00:40:02.135241 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.88s 2025-03-27 00:40:02.135493 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.84s 2025-03-27 00:40:02.136044 | orchestrator | osism.commons.network : Gather variables for each operating 
system ------ 0.75s 2025-03-27 00:40:02.762788 | orchestrator | + osism apply wireguard 2025-03-27 00:40:04.288744 | orchestrator | 2025-03-27 00:40:04 | INFO  | Task 11d1fc5b-40f7-47f9-b64e-d2245a2b08d4 (wireguard) was prepared for execution. 2025-03-27 00:40:07.706887 | orchestrator | 2025-03-27 00:40:04 | INFO  | It takes a moment until task 11d1fc5b-40f7-47f9-b64e-d2245a2b08d4 (wireguard) has been started and output is visible here. 2025-03-27 00:40:07.707028 | orchestrator | 2025-03-27 00:40:07.707381 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-03-27 00:40:07.708394 | orchestrator | 2025-03-27 00:40:07.710924 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-03-27 00:40:09.336575 | orchestrator | Thursday 27 March 2025 00:40:07 +0000 (0:00:00.185) 0:00:00.185 ******** 2025-03-27 00:40:09.336707 | orchestrator | ok: [testbed-manager] 2025-03-27 00:40:09.337598 | orchestrator | 2025-03-27 00:40:09.337632 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-03-27 00:40:09.337811 | orchestrator | Thursday 27 March 2025 00:40:09 +0000 (0:00:01.627) 0:00:01.812 ******** 2025-03-27 00:40:16.450541 | orchestrator | changed: [testbed-manager] 2025-03-27 00:40:16.451233 | orchestrator | 2025-03-27 00:40:16.452115 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-03-27 00:40:17.026875 | orchestrator | Thursday 27 March 2025 00:40:16 +0000 (0:00:07.114) 0:00:08.927 ******** 2025-03-27 00:40:17.027004 | orchestrator | changed: [testbed-manager] 2025-03-27 00:40:17.027764 | orchestrator | 2025-03-27 00:40:17.027800 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-03-27 00:40:17.028431 | orchestrator | Thursday 27 March 2025 00:40:17 +0000 (0:00:00.575) 0:00:09.503 ******** 2025-03-27 00:40:17.479758 | orchestrator | changed: [testbed-manager] 2025-03-27 00:40:17.481550 | orchestrator | 2025-03-27 00:40:17.481615 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-03-27 00:40:17.481677 | orchestrator | Thursday 27 March 2025 00:40:17 +0000 (0:00:00.454) 0:00:09.958 ******** 2025-03-27 00:40:18.017983 | orchestrator | ok: [testbed-manager] 2025-03-27 00:40:18.018215 | orchestrator | 2025-03-27 00:40:18.018509 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-03-27 00:40:18.018965 | orchestrator | Thursday 27 March 2025 00:40:18 +0000 (0:00:00.538) 0:00:10.496 ******** 2025-03-27 00:40:18.587611 | orchestrator | ok: [testbed-manager] 2025-03-27 00:40:18.587730 | orchestrator | 2025-03-27 00:40:18.588443 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-03-27 00:40:18.588547 | orchestrator | Thursday 27 March 2025 00:40:18 +0000 (0:00:00.570) 0:00:11.066 ******** 2025-03-27 00:40:19.028692 | orchestrator | ok: [testbed-manager] 2025-03-27 00:40:19.029121 | orchestrator | 2025-03-27 00:40:19.029442 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-03-27 00:40:20.340251 | orchestrator | Thursday 27 March 2025 00:40:19 +0000 (0:00:00.440) 0:00:11.507 ******** 2025-03-27 00:40:20.340379 | orchestrator | changed: [testbed-manager] 2025-03-27 00:40:20.340780 | orchestrator | 2025-03-27 00:40:20.341887 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-03-27 00:40:20.341940 | orchestrator | Thursday 27 March 2025 00:40:20 +0000 (0:00:01.308) 0:00:12.816 ******** 2025-03-27 00:40:21.342153 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-27 00:40:21.343050 | orchestrator | changed: [testbed-manager] 2025-03-27 00:40:21.344628 | orchestrator | 2025-03-27 00:40:21.345471 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-03-27 00:40:21.345505 | orchestrator | Thursday 27 March 2025 00:40:21 +0000 (0:00:01.003) 0:00:13.819 ******** 2025-03-27 00:40:23.242095 | orchestrator | changed: [testbed-manager] 2025-03-27 00:40:23.242339 | orchestrator | 2025-03-27 00:40:23.244795 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-03-27 00:40:23.244977 | orchestrator | Thursday 27 March 2025 00:40:23 +0000 (0:00:01.899) 0:00:15.719 ******** 2025-03-27 00:40:24.182880 | orchestrator | changed: [testbed-manager] 2025-03-27 00:40:24.183123 | orchestrator | 2025-03-27 00:40:24.184146 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:40:24.184549 | orchestrator | 2025-03-27 00:40:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:40:24.184906 | orchestrator | 2025-03-27 00:40:24 | INFO  | Please wait and do not abort execution. 2025-03-27 00:40:24.185572 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:40:24.186394 | orchestrator | 2025-03-27 00:40:24.187199 | orchestrator | Thursday 27 March 2025 00:40:24 +0000 (0:00:00.942) 0:00:16.662 ******** 2025-03-27 00:40:24.187528 | orchestrator | =============================================================================== 2025-03-27 00:40:24.187933 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.11s 2025-03-27 00:40:24.188432 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.90s 2025-03-27 00:40:24.188870 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.63s 2025-03-27 00:40:24.189371 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.31s 2025-03-27 00:40:24.190220 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.00s 2025-03-27 00:40:24.190512 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s 2025-03-27 00:40:24.190839 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s 2025-03-27 00:40:24.191193 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.57s 2025-03-27 00:40:24.191428 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s 2025-03-27 00:40:24.191744 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2025-03-27 00:40:24.192204 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s 2025-03-27 00:40:24.839280 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-03-27 00:40:24.875441 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-03-27 00:40:24.970513 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-03-27 00:40:24.970592 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 158 0 --:--:-- --:--:-- --:--:-- 159 2025-03-27 00:40:24.989234 | orchestrator | + osism apply --environment custom workarounds 2025-03-27 00:40:26.559421 | orchestrator | 2025-03-27 00:40:26 | INFO  | Trying to run play workarounds in environment custom 2025-03-27 00:40:26.626223 | orchestrator | 2025-03-27 00:40:26 | INFO  | Task bd8110a0-3920-492c-8de8-58e615eb92cb (workarounds) was prepared for execution. 2025-03-27 00:40:30.074530 | orchestrator | 2025-03-27 00:40:26 | INFO  | It takes a moment until task bd8110a0-3920-492c-8de8-58e615eb92cb (workarounds) has been started and output is visible here. 2025-03-27 00:40:30.074675 | orchestrator | 2025-03-27 00:40:30.078502 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 00:40:30.078739 | orchestrator | 2025-03-27 00:40:30.078770 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-03-27 00:40:30.079676 | orchestrator | Thursday 27 March 2025 00:40:30 +0000 (0:00:00.153) 0:00:00.153 ******** 2025-03-27 00:40:30.264444 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-03-27 00:40:30.352035 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-03-27 00:40:30.468901 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-03-27 00:40:30.573348 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-03-27 00:40:30.672056 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-03-27 00:40:30.993431 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-03-27 00:40:30.995125 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-03-27 00:40:30.995602 | orchestrator | 2025-03-27 00:40:30.997085 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-03-27 00:40:31.000659 | orchestrator | 2025-03-27 00:40:31.000734 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-03-27 00:40:31.001334 | orchestrator | Thursday 27 March 2025 00:40:30 +0000 (0:00:00.916) 0:00:01.070 ******** 2025-03-27 00:40:33.909337 | orchestrator | ok: [testbed-manager] 2025-03-27 00:40:33.910139 | orchestrator | 2025-03-27 00:40:33.910283 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-03-27 00:40:33.910315 | orchestrator | 2025-03-27 00:40:33.910397 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-03-27 00:40:35.868166 | orchestrator | Thursday 27 March 2025 00:40:33 +0000 (0:00:02.913) 0:00:03.984 ******** 2025-03-27 00:40:35.868397 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:40:35.868485 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:40:35.870279 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:40:35.871289 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:40:35.872686 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:40:35.873811 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:40:35.874618 | orchestrator | 2025-03-27 00:40:35.875471 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-03-27 00:40:35.875687 | orchestrator | 2025-03-27 
00:40:35.876693 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-03-27 00:40:35.877282 | orchestrator | Thursday 27 March 2025 00:40:35 +0000 (0:00:01.957) 0:00:05.941 ******** 2025-03-27 00:40:37.529881 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-27 00:40:37.531243 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-27 00:40:37.534239 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-27 00:40:37.536321 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-27 00:40:37.536357 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-27 00:40:37.536821 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-27 00:40:37.537783 | orchestrator | 2025-03-27 00:40:37.538433 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-03-27 00:40:37.538905 | orchestrator | Thursday 27 March 2025 00:40:37 +0000 (0:00:01.665) 0:00:07.607 ******** 2025-03-27 00:40:41.091583 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:40:41.095307 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:40:41.095389 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:40:41.095430 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:40:41.097908 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:40:41.097956 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:40:41.097970 | orchestrator | 2025-03-27 00:40:41.097996 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-03-27 00:40:41.100192 | orchestrator | Thursday 27 March 2025 00:40:41 +0000 (0:00:03.563) 0:00:11.170 ******** 2025-03-27 00:40:41.262227 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:40:41.348502 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:40:41.435759 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:40:41.720732 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:40:41.866113 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:40:41.866372 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:40:41.867106 | orchestrator | 2025-03-27 00:40:41.868543 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-03-27 00:40:41.869001 | orchestrator | 2025-03-27 00:40:41.869936 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-03-27 00:40:41.870393 | orchestrator | Thursday 27 March 2025 00:40:41 +0000 (0:00:00.774) 0:00:11.945 ******** 2025-03-27 00:40:43.653054 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:40:43.653273 | orchestrator | changed: [testbed-manager] 2025-03-27 00:40:43.654102 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:40:43.654793 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:40:43.657311 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:40:43.657640 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:40:43.658608 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:40:43.659446 | orchestrator | 2025-03-27 00:40:43.660485 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-03-27 00:40:43.660848 | orchestrator | Thursday 27 March 2025 00:40:43 +0000 (0:00:01.785) 0:00:13.730 ******** 2025-03-27 00:40:45.397645 | orchestrator | changed: [testbed-manager] 2025-03-27 00:40:45.398213 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:40:45.399487 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:40:45.400828 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:40:45.401317 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:40:45.402453 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:40:45.404151 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:40:45.404345 | orchestrator | 2025-03-27 00:40:45.405501 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-03-27 00:40:45.405837 | orchestrator | Thursday 27 March 2025 00:40:45 +0000 (0:00:01.742) 0:00:15.472 ******** 2025-03-27 00:40:47.023877 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:40:47.024214 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:40:47.025163 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:40:47.030197 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:40:47.030686 | orchestrator | ok: [testbed-manager] 2025-03-27 00:40:47.031852 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:40:47.032848 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:40:47.033145 | orchestrator | 2025-03-27 00:40:47.034155 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-03-27 00:40:47.034370 | orchestrator | Thursday 27 March 2025 00:40:47 +0000 (0:00:01.631) 0:00:17.104 ******** 2025-03-27 00:40:48.947592 | orchestrator | changed: [testbed-manager] 2025-03-27 00:40:48.950281 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:40:48.950334 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:40:48.951573 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:40:48.951600 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:40:48.951620 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:40:48.954553 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:40:48.955334 | orchestrator | 2025-03-27 00:40:48.956136 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-03-27 00:40:48.957300 | orchestrator | Thursday 27 March 2025 00:40:48 +0000 (0:00:01.922) 0:00:19.027 ******** 2025-03-27 00:40:49.143024 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:40:49.223863 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:40:49.313990 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:40:49.393776 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:40:49.665035 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:40:49.826667 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:40:49.827006 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:40:49.828506 | orchestrator | 2025-03-27 00:40:49.834146 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-03-27 00:40:52.536270 | orchestrator | 2025-03-27 00:40:52.536387 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-03-27 00:40:52.536406 | orchestrator | Thursday 27 March 2025 00:40:49 +0000 (0:00:00.881) 0:00:19.908 ******** 2025-03-27 00:40:52.536490 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:40:52.536615 
| orchestrator | ok: [testbed-manager] 2025-03-27 00:40:52.536638 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:40:52.536951 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:40:52.537921 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:40:52.540708 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:40:52.542059 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:40:52.542880 | orchestrator | 2025-03-27 00:40:52.543899 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:40:52.544071 | orchestrator | 2025-03-27 00:40:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:40:52.545124 | orchestrator | 2025-03-27 00:40:52 | INFO  | Please wait and do not abort execution. 2025-03-27 00:40:52.545200 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:40:52.545547 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:40:52.546115 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:40:52.546380 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:40:52.546807 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:40:52.547388 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:40:52.547475 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:40:52.547805 | orchestrator | 2025-03-27 00:40:52.549000 | orchestrator | Thursday 27 March 2025 00:40:52 +0000 (0:00:02.705) 0:00:22.614 ******** 2025-03-27 00:40:52.549595 | orchestrator | =============================================================================== 2025-03-27 00:40:52.550105 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.56s 2025-03-27 00:40:52.550648 | orchestrator | Apply netplan configuration --------------------------------------------- 2.91s 2025-03-27 00:40:52.551164 | orchestrator | Install python3-docker -------------------------------------------------- 2.71s 2025-03-27 00:40:52.551566 | orchestrator | Apply netplan configuration --------------------------------------------- 1.96s 2025-03-27 00:40:52.552456 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.92s 2025-03-27 00:40:52.552744 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.79s 2025-03-27 00:40:52.553125 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.74s 2025-03-27 00:40:52.553405 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.67s 2025-03-27 00:40:52.553756 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.63s 2025-03-27 00:40:52.554079 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.92s 2025-03-27 00:40:52.554379 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.88s 2025-03-27 00:40:52.554726 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s 2025-03-27 00:40:53.202895 | orchestrator | + osism 
apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-03-27 00:40:54.740323 | orchestrator | 2025-03-27 00:40:54 | INFO  | Task c2f6377b-3e24-4c25-ae82-1da3e2b29b08 (reboot) was prepared for execution. 2025-03-27 00:40:58.059604 | orchestrator | 2025-03-27 00:40:54 | INFO  | It takes a moment until task c2f6377b-3e24-4c25-ae82-1da3e2b29b08 (reboot) has been started and output is visible here. 2025-03-27 00:40:58.059747 | orchestrator | 2025-03-27 00:40:58.061946 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-27 00:40:58.062438 | orchestrator | 2025-03-27 00:40:58.064037 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-27 00:40:58.064433 | orchestrator | Thursday 27 March 2025 00:40:58 +0000 (0:00:00.162) 0:00:00.162 ******** 2025-03-27 00:40:58.176594 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:40:58.178138 | orchestrator | 2025-03-27 00:40:59.168640 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-27 00:40:59.168762 | orchestrator | Thursday 27 March 2025 00:40:58 +0000 (0:00:00.120) 0:00:00.283 ******** 2025-03-27 00:40:59.168795 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:40:59.169336 | orchestrator | 2025-03-27 00:40:59.169682 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-27 00:40:59.170077 | orchestrator | Thursday 27 March 2025 00:40:59 +0000 (0:00:00.988) 0:00:01.271 ******** 2025-03-27 00:40:59.284440 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:40:59.285356 | orchestrator | 2025-03-27 00:40:59.286906 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-27 00:40:59.287386 | orchestrator | 2025-03-27 00:40:59.287415 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-27 00:40:59.288311 | orchestrator | Thursday 27 March 2025 00:40:59 +0000 (0:00:00.115) 0:00:01.386 ******** 2025-03-27 00:40:59.390550 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:40:59.390969 | orchestrator | 2025-03-27 00:40:59.391940 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-27 00:40:59.392793 | orchestrator | Thursday 27 March 2025 00:40:59 +0000 (0:00:00.109) 0:00:01.496 ******** 2025-03-27 00:41:00.121436 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:41:00.121672 | orchestrator | 2025-03-27 00:41:00.124191 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-27 00:41:00.125572 | orchestrator | Thursday 27 March 2025 00:41:00 +0000 (0:00:00.727) 0:00:02.224 ******** 2025-03-27 00:41:00.237255 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:41:00.240206 | orchestrator | 2025-03-27 00:41:00.241450 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-27 00:41:00.241488 | orchestrator | 2025-03-27 00:41:00.242417 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-27 00:41:00.243476 | orchestrator | Thursday 27 March 2025 00:41:00 +0000 (0:00:00.116) 0:00:02.340 ******** 2025-03-27 00:41:00.341063 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:41:00.341358 | orchestrator | 2025-03-27 00:41:00.343403 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] 
****************** 2025-03-27 00:41:01.176309 | orchestrator | Thursday 27 March 2025 00:41:00 +0000 (0:00:00.105) 0:00:02.446 ******** 2025-03-27 00:41:01.176472 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:41:01.333371 | orchestrator | 2025-03-27 00:41:01.333475 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-27 00:41:01.333494 | orchestrator | Thursday 27 March 2025 00:41:01 +0000 (0:00:00.832) 0:00:03.278 ******** 2025-03-27 00:41:01.333525 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:41:01.334505 | orchestrator | 2025-03-27 00:41:01.335500 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-27 00:41:01.338650 | orchestrator | 2025-03-27 00:41:01.450982 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-27 00:41:01.451066 | orchestrator | Thursday 27 March 2025 00:41:01 +0000 (0:00:00.156) 0:00:03.435 ******** 2025-03-27 00:41:01.451093 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:41:01.452605 | orchestrator | 2025-03-27 00:41:01.453311 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-27 00:41:01.455607 | orchestrator | Thursday 27 March 2025 00:41:01 +0000 (0:00:00.121) 0:00:03.557 ******** 2025-03-27 00:41:02.160507 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:41:02.162508 | orchestrator | 2025-03-27 00:41:02.163370 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-27 00:41:02.164163 | orchestrator | Thursday 27 March 2025 00:41:02 +0000 (0:00:00.707) 0:00:04.264 ******** 2025-03-27 00:41:02.300286 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:41:02.300876 | orchestrator | 2025-03-27 00:41:02.301959 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-27 00:41:02.302564 | orchestrator | 2025-03-27 00:41:02.303512 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-27 00:41:02.304294 | orchestrator | Thursday 27 March 2025 00:41:02 +0000 (0:00:00.136) 0:00:04.400 ******** 2025-03-27 00:41:02.406269 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:41:02.407349 | orchestrator | 2025-03-27 00:41:02.408329 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-27 00:41:02.408360 | orchestrator | Thursday 27 March 2025 00:41:02 +0000 (0:00:00.110) 0:00:04.511 ******** 2025-03-27 00:41:03.126651 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:41:03.128135 | orchestrator | 2025-03-27 00:41:03.129972 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-27 00:41:03.131411 | orchestrator | Thursday 27 March 2025 00:41:03 +0000 (0:00:00.719) 0:00:05.230 ******** 2025-03-27 00:41:03.253336 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:41:03.253853 | orchestrator | 2025-03-27 00:41:03.256114 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-27 00:41:03.256238 | orchestrator | 2025-03-27 00:41:03.256954 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-27 00:41:03.257533 | orchestrator | Thursday 27 March 2025 00:41:03 +0000 (0:00:00.124) 0:00:05.354 ******** 2025-03-27 00:41:03.363138 | orchestrator | skipping: 
[testbed-node-5] 2025-03-27 00:41:03.364428 | orchestrator | 2025-03-27 00:41:03.366093 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-27 00:41:03.366807 | orchestrator | Thursday 27 March 2025 00:41:03 +0000 (0:00:00.113) 0:00:05.468 ******** 2025-03-27 00:41:04.109943 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:41:04.110304 | orchestrator | 2025-03-27 00:41:04.110723 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-27 00:41:04.111358 | orchestrator | Thursday 27 March 2025 00:41:04 +0000 (0:00:00.747) 0:00:06.215 ******** 2025-03-27 00:41:04.155132 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:41:04.156643 | orchestrator | 2025-03-27 00:41:04.158494 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:41:04.158929 | orchestrator | 2025-03-27 00:41:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:41:04.161714 | orchestrator | 2025-03-27 00:41:04 | INFO  | Please wait and do not abort execution. 2025-03-27 00:41:04.161776 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:41:04.163628 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:41:04.166501 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:41:04.167806 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:41:04.169132 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:41:04.169751 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:41:04.170967 | orchestrator | 2025-03-27 00:41:04.171746 | orchestrator | Thursday 27 March 2025 00:41:04 +0000 (0:00:00.043) 0:00:06.259 ******** 2025-03-27 00:41:04.172510 | orchestrator | =============================================================================== 2025-03-27 00:41:04.173261 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.72s 2025-03-27 00:41:04.174240 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.69s 2025-03-27 00:41:04.174832 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.68s 2025-03-27 00:41:04.738273 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-03-27 00:41:06.530560 | orchestrator | 2025-03-27 00:41:06 | INFO  | Task d4411a07-0585-4f5b-9982-4ff75b55b6d2 (wait-for-connection) was prepared for execution. 2025-03-27 00:41:09.816586 | orchestrator | 2025-03-27 00:41:06 | INFO  | It takes a moment until task d4411a07-0585-4f5b-9982-4ff75b55b6d2 (wait-for-connection) has been started and output is visible here. 
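The reboot play above intentionally triggers the reboot without waiting; reachability is verified afterwards by the separate wait-for-connection play. A rough manual equivalent of this two-step pattern is sketched below: the node names come from the play output, while the SSH options and poll interval are illustrative assumptions rather than what the osism plays actually run.

    # Trigger a reboot on each node without waiting for it to come back ...
    for node in testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5; do
        ssh "$node" 'sudo systemctl reboot' || true
    done
    # ... then poll each node until SSH answers again.
    for node in testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5; do
        until ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true 2>/dev/null; do
            sleep 5
        done
        echo "$node is reachable again"
    done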
2025-03-27 00:41:09.816729 | orchestrator | 2025-03-27 00:41:09.817686 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-03-27 00:41:09.817723 | orchestrator | 2025-03-27 00:41:09.818342 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-03-27 00:41:09.818993 | orchestrator | Thursday 27 March 2025 00:41:09 +0000 (0:00:00.203) 0:00:00.203 ******** 2025-03-27 00:41:23.264719 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:41:23.264905 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:41:23.264932 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:41:23.264948 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:41:23.264964 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:41:23.264985 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:41:23.266771 | orchestrator | 2025-03-27 00:41:23.267057 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:41:23.267645 | orchestrator | 2025-03-27 00:41:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:41:23.268328 | orchestrator | 2025-03-27 00:41:23 | INFO  | Please wait and do not abort execution. 2025-03-27 00:41:23.268364 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:41:23.268969 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:41:23.269835 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:41:23.269939 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:41:23.270601 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:41:23.271290 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:41:23.271486 | orchestrator | 2025-03-27 00:41:23.271911 | orchestrator | Thursday 27 March 2025 00:41:23 +0000 (0:00:13.445) 0:00:13.649 ******** 2025-03-27 00:41:23.272590 | orchestrator | =============================================================================== 2025-03-27 00:41:23.867894 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.45s 2025-03-27 00:41:23.868037 | orchestrator | + osism apply hddtemp 2025-03-27 00:41:25.289052 | orchestrator | 2025-03-27 00:41:25 | INFO  | Task c21ebd05-c922-48ba-85f2-f29f0efd41c1 (hddtemp) was prepared for execution. 2025-03-27 00:41:28.399607 | orchestrator | 2025-03-27 00:41:25 | INFO  | It takes a moment until task c21ebd05-c922-48ba-85f2-f29f0efd41c1 (hddtemp) has been started and output is visible here. 
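A task like "Wait until remote system is reachable" is typically implemented with Ansible's built-in wait_for_connection module. For an ad-hoc check against the same host group outside the osism wrapper, something like the following would work; the inventory path is a placeholder and the timeout/sleep values are assumptions, not taken from the log:

    # Ad-hoc reachability check using the wait_for_connection module.
    # Replace inventory/hosts.yml with the real inventory used by osism-ansible.
    ansible testbed-nodes -i inventory/hosts.yml \
        -m ansible.builtin.wait_for_connection -a "timeout=600 sleep=10"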
2025-03-27 00:41:28.399751 | orchestrator | 2025-03-27 00:41:28.399911 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-03-27 00:41:28.399945 | orchestrator | 2025-03-27 00:41:28.400687 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-03-27 00:41:28.402361 | orchestrator | Thursday 27 March 2025 00:41:28 +0000 (0:00:00.222) 0:00:00.222 ******** 2025-03-27 00:41:28.562643 | orchestrator | ok: [testbed-manager] 2025-03-27 00:41:28.660643 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:41:28.738688 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:41:28.816694 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:41:28.897330 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:41:29.147629 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:41:29.149002 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:41:29.152099 | orchestrator | 2025-03-27 00:41:29.157487 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-03-27 00:41:30.397385 | orchestrator | Thursday 27 March 2025 00:41:29 +0000 (0:00:00.748) 0:00:00.970 ******** 2025-03-27 00:41:30.397511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 00:41:30.401014 | orchestrator | 2025-03-27 00:41:30.401056 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-03-27 00:41:32.594215 | orchestrator | Thursday 27 March 2025 00:41:30 +0000 (0:00:01.246) 0:00:02.216 ******** 2025-03-27 00:41:32.594354 | orchestrator | ok: [testbed-manager] 2025-03-27 00:41:32.594686 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:41:32.597947 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:41:32.599068 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:41:32.599807 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:41:32.601233 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:41:32.602131 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:41:32.602736 | orchestrator | 2025-03-27 00:41:32.602766 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-03-27 00:41:32.603717 | orchestrator | Thursday 27 March 2025 00:41:32 +0000 (0:00:02.199) 0:00:04.416 ******** 2025-03-27 00:41:33.233039 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:41:33.327361 | orchestrator | changed: [testbed-manager] 2025-03-27 00:41:33.813489 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:41:33.816121 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:41:33.816861 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:41:33.816896 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:41:33.818194 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:41:33.818902 | orchestrator | 2025-03-27 00:41:33.819609 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-03-27 00:41:33.820582 | orchestrator | Thursday 27 March 2025 00:41:33 +0000 (0:00:01.215) 0:00:05.632 ******** 2025-03-27 00:41:35.274719 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:41:35.278253 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:41:35.279010 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:41:35.279039 | orchestrator | ok: [testbed-node-3] 2025-03-27 
00:41:35.279056 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:41:35.279077 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:41:35.279734 | orchestrator | ok: [testbed-manager] 2025-03-27 00:41:35.280410 | orchestrator | 2025-03-27 00:41:35.281342 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-03-27 00:41:35.281759 | orchestrator | Thursday 27 March 2025 00:41:35 +0000 (0:00:01.461) 0:00:07.094 ******** 2025-03-27 00:41:35.533397 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:41:35.626316 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:41:35.711750 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:41:35.794126 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:41:35.921361 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:41:35.923577 | orchestrator | changed: [testbed-manager] 2025-03-27 00:41:35.924475 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:41:35.925326 | orchestrator | 2025-03-27 00:41:35.926896 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-03-27 00:41:35.927712 | orchestrator | Thursday 27 March 2025 00:41:35 +0000 (0:00:00.651) 0:00:07.745 ******** 2025-03-27 00:41:50.331282 | orchestrator | changed: [testbed-manager] 2025-03-27 00:41:50.331484 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:41:50.331508 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:41:50.331524 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:41:50.331538 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:41:50.331558 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:41:50.332946 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:41:50.333792 | orchestrator | 2025-03-27 00:41:50.334512 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-03-27 00:41:50.335208 | orchestrator | Thursday 27 March 2025 00:41:50 +0000 (0:00:14.399) 0:00:22.145 ******** 2025-03-27 00:41:51.620001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 00:41:51.624210 | orchestrator | 2025-03-27 00:41:53.540003 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-03-27 00:41:53.540116 | orchestrator | Thursday 27 March 2025 00:41:51 +0000 (0:00:01.294) 0:00:23.439 ******** 2025-03-27 00:41:53.540149 | orchestrator | changed: [testbed-manager] 2025-03-27 00:41:53.541037 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:41:53.542160 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:41:53.543007 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:41:53.545417 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:41:53.545741 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:41:53.547344 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:41:53.548306 | orchestrator | 2025-03-27 00:41:53.549652 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:41:53.550005 | orchestrator | 2025-03-27 00:41:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:41:53.551319 | orchestrator | 2025-03-27 00:41:53 | INFO  | Please wait and do not abort execution. 
2025-03-27 00:41:53.551355 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:41:53.551765 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:41:53.552509 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:41:53.553250 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:41:53.553887 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:41:53.554307 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:41:53.555268 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:41:53.555357 | orchestrator | 2025-03-27 00:41:53.555886 | orchestrator | Thursday 27 March 2025 00:41:53 +0000 (0:00:01.923) 0:00:25.363 ******** 2025-03-27 00:41:53.556344 | orchestrator | =============================================================================== 2025-03-27 00:41:53.556933 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.40s 2025-03-27 00:41:53.557886 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.20s 2025-03-27 00:41:53.558304 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s 2025-03-27 00:41:53.558806 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.46s 2025-03-27 00:41:53.559306 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.29s 2025-03-27 00:41:53.559855 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.25s 2025-03-27 00:41:53.561104 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s 2025-03-27 00:41:53.562263 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s 2025-03-27 00:41:53.563270 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.65s 2025-03-27 00:41:54.216768 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-03-27 00:41:55.690344 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-27 00:41:55.691264 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-27 00:41:55.691294 | orchestrator | + local max_attempts=60 2025-03-27 00:41:55.691309 | orchestrator | + local name=ceph-ansible 2025-03-27 00:41:55.691324 | orchestrator | + local attempt_num=1 2025-03-27 00:41:55.691345 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-27 00:41:55.733866 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-27 00:41:55.734565 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-27 00:41:55.734592 | orchestrator | + local max_attempts=60 2025-03-27 00:41:55.734608 | orchestrator | + local name=kolla-ansible 2025-03-27 00:41:55.734624 | orchestrator | + local attempt_num=1 2025-03-27 00:41:55.734645 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-27 00:41:55.761852 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-27 00:41:55.762693 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-03-27 00:41:55.762718 | orchestrator | + local max_attempts=60 2025-03-27 00:41:55.762734 | orchestrator | + local name=osism-ansible 2025-03-27 00:41:55.762750 | orchestrator | + local attempt_num=1 2025-03-27 00:41:55.762770 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-27 00:41:55.795022 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-27 00:41:55.975402 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-27 00:41:55.975462 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-27 00:41:55.975487 | orchestrator | ARA in ceph-ansible already disabled. 2025-03-27 00:41:56.146685 | orchestrator | ARA in kolla-ansible already disabled. 2025-03-27 00:41:56.340002 | orchestrator | ARA in osism-ansible already disabled. 2025-03-27 00:41:56.524263 | orchestrator | ARA in osism-kubernetes already disabled. 2025-03-27 00:41:56.525278 | orchestrator | + osism apply gather-facts 2025-03-27 00:41:58.137230 | orchestrator | 2025-03-27 00:41:58 | INFO  | Task 0381e736-ff42-409f-b47c-75890dc0739b (gather-facts) was prepared for execution. 2025-03-27 00:42:01.512439 | orchestrator | 2025-03-27 00:41:58 | INFO  | It takes a moment until task 0381e736-ff42-409f-b47c-75890dc0739b (gather-facts) has been started and output is visible here. 2025-03-27 00:42:01.512615 | orchestrator | 2025-03-27 00:42:01.512888 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-27 00:42:01.517080 | orchestrator | 2025-03-27 00:42:01.519079 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-27 00:42:01.520254 | orchestrator | Thursday 27 March 2025 00:42:01 +0000 (0:00:00.199) 0:00:00.200 ******** 2025-03-27 00:42:06.987286 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:42:06.988047 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:42:06.988663 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:42:06.990157 | orchestrator | ok: [testbed-manager] 2025-03-27 00:42:06.991415 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:42:06.992696 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:42:06.994356 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:42:06.994492 | orchestrator | 2025-03-27 00:42:06.994514 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-27 00:42:06.994532 | orchestrator | 2025-03-27 00:42:06.995887 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-27 00:42:06.996816 | orchestrator | Thursday 27 March 2025 00:42:06 +0000 (0:00:05.474) 0:00:05.675 ******** 2025-03-27 00:42:07.166113 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:42:07.247432 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:42:07.340419 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:42:07.424736 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:42:07.507617 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:42:07.556046 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:42:07.556709 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:42:07.557410 | orchestrator | 2025-03-27 00:42:07.558131 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:42:07.559411 | orchestrator | 2025-03-27 00:42:07 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-03-27 00:42:07.560346 | orchestrator | 2025-03-27 00:42:07 | INFO  | Please wait and do not abort execution. 2025-03-27 00:42:07.560380 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:42:07.561059 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:42:07.561744 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:42:07.562551 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:42:07.563238 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:42:07.563869 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:42:07.564377 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 00:42:07.565078 | orchestrator | 2025-03-27 00:42:07.565617 | orchestrator | Thursday 27 March 2025 00:42:07 +0000 (0:00:00.571) 0:00:06.246 ******** 2025-03-27 00:42:07.566414 | orchestrator | =============================================================================== 2025-03-27 00:42:07.566700 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.47s 2025-03-27 00:42:07.567343 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2025-03-27 00:42:08.204553 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-03-27 00:42:08.224060 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-03-27 00:42:08.244871 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-03-27 00:42:08.259926 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-03-27 00:42:08.276848 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-03-27 00:42:08.290785 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-03-27 00:42:08.311410 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-03-27 00:42:08.326919 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-03-27 00:42:08.343627 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-03-27 00:42:08.357034 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-03-27 00:42:08.372928 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-03-27 00:42:08.389766 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-03-27 00:42:08.404569 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-03-27 00:42:08.421320 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-03-27 00:42:08.439239 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-03-27 00:42:08.457058 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-03-27 00:42:08.476226 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-03-27 00:42:08.495881 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-03-27 00:42:08.514222 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-03-27 00:42:08.534745 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-03-27 00:42:08.559018 | orchestrator | + [[ false == \t\r\u\e ]] 2025-03-27 00:42:08.990315 | orchestrator | changed 2025-03-27 00:42:09.051792 | 2025-03-27 00:42:09.051898 | TASK [Deploy services] 2025-03-27 00:42:09.169022 | orchestrator | skipping: Conditional result was False 2025-03-27 00:42:09.186898 | 2025-03-27 00:42:09.187022 | TASK [Deploy in a nutshell] 2025-03-27 00:42:09.935177 | orchestrator | 2025-03-27 00:42:10.014231 | orchestrator | # PULL IMAGES 2025-03-27 00:42:10.014324 | orchestrator | 2025-03-27 00:42:10.014343 | orchestrator | + set -e 2025-03-27 00:42:10.014391 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-27 00:42:10.014415 | orchestrator | ++ export INTERACTIVE=false 2025-03-27 00:42:10.014432 | orchestrator | ++ INTERACTIVE=false 2025-03-27 00:42:10.014455 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-27 00:42:10.014479 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-27 00:42:10.014495 | orchestrator | + source /opt/manager-vars.sh 2025-03-27 00:42:10.014509 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-27 00:42:10.014523 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-27 00:42:10.014537 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-27 00:42:10.014550 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-27 00:42:10.014564 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-27 00:42:10.014578 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-27 00:42:10.014593 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-27 00:42:10.014607 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-03-27 00:42:10.014621 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-27 00:42:10.014635 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-27 00:42:10.014648 | orchestrator | ++ export ARA=false 2025-03-27 00:42:10.014662 | orchestrator | ++ ARA=false 2025-03-27 00:42:10.014676 | orchestrator | ++ export TEMPEST=false 2025-03-27 00:42:10.014689 | orchestrator | ++ TEMPEST=false 2025-03-27 00:42:10.014704 | orchestrator | ++ export IS_ZUUL=true 2025-03-27 00:42:10.014717 | orchestrator | ++ IS_ZUUL=true 2025-03-27 00:42:10.014731 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.178 2025-03-27 00:42:10.014745 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.178 2025-03-27 00:42:10.014759 | orchestrator | ++ export EXTERNAL_API=false 2025-03-27 00:42:10.014773 | orchestrator | ++ EXTERNAL_API=false 2025-03-27 00:42:10.014786 
| orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-27 00:42:10.014800 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-27 00:42:10.014826 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-27 00:42:10.014840 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-27 00:42:10.014854 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-03-27 00:42:10.014867 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-27 00:42:10.014881 | orchestrator | + echo 2025-03-27 00:42:10.014895 | orchestrator | + echo '# PULL IMAGES' 2025-03-27 00:42:10.014909 | orchestrator | + echo 2025-03-27 00:42:10.014922 | orchestrator | ++ semver 8.1.0 7.0.0 2025-03-27 00:42:10.014958 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-27 00:42:11.562454 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-03-27 00:42:11.562614 | orchestrator | 2025-03-27 00:42:11 | INFO  | Trying to run play pull-images in environment custom 2025-03-27 00:42:11.612830 | orchestrator | 2025-03-27 00:42:11 | INFO  | Task fec7e33d-c434-4660-80fd-231283653004 (pull-images) was prepared for execution. 2025-03-27 00:42:14.821745 | orchestrator | 2025-03-27 00:42:11 | INFO  | It takes a moment until task fec7e33d-c434-4660-80fd-231283653004 (pull-images) has been started and output is visible here. 2025-03-27 00:42:14.821862 | orchestrator | 2025-03-27 00:42:14.822006 | orchestrator | PLAY [Pull images] ************************************************************* 2025-03-27 00:42:14.824560 | orchestrator | 2025-03-27 00:42:14.824647 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-03-27 00:42:56.110850 | orchestrator | Thursday 27 March 2025 00:42:14 +0000 (0:00:00.150) 0:00:00.150 ******** 2025-03-27 00:42:56.111019 | orchestrator | changed: [testbed-manager] 2025-03-27 00:43:47.693239 | orchestrator | 2025-03-27 00:43:47.693397 | orchestrator | TASK [Pull other images] ******************************************************* 2025-03-27 00:43:47.693421 | orchestrator | Thursday 27 March 2025 00:42:56 +0000 (0:00:41.287) 0:00:41.437 ******** 2025-03-27 00:43:47.693456 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-03-27 00:43:47.693886 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-03-27 00:43:47.694004 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-03-27 00:43:47.694081 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-03-27 00:43:47.694121 | orchestrator | changed: [testbed-manager] => (item=common) 2025-03-27 00:43:47.694139 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-03-27 00:43:47.694154 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-03-27 00:43:47.694217 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-03-27 00:43:47.694278 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-03-27 00:43:47.694616 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-03-27 00:43:47.697838 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-03-27 00:43:47.698452 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-03-27 00:43:47.698492 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-03-27 00:43:47.698517 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-03-27 00:43:47.699046 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-03-27 00:43:47.700396 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-03-27 00:43:47.701264 | orchestrator | 
changed: [testbed-manager] => (item=octavia) 2025-03-27 00:43:47.702091 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-03-27 00:43:47.702841 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-03-27 00:43:47.703434 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-03-27 00:43:47.703982 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-03-27 00:43:47.704968 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-03-27 00:43:47.705276 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-03-27 00:43:47.705848 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-03-27 00:43:47.706432 | orchestrator | 2025-03-27 00:43:47.709159 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:43:47.710714 | orchestrator | 2025-03-27 00:43:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:43:47.712092 | orchestrator | 2025-03-27 00:43:47 | INFO  | Please wait and do not abort execution. 2025-03-27 00:43:47.712123 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:43:47.713195 | orchestrator | 2025-03-27 00:43:47.714084 | orchestrator | Thursday 27 March 2025 00:43:47 +0000 (0:00:51.578) 0:01:33.016 ******** 2025-03-27 00:43:47.715078 | orchestrator | =============================================================================== 2025-03-27 00:43:47.715533 | orchestrator | Pull other images ------------------------------------------------------ 51.58s 2025-03-27 00:43:47.716913 | orchestrator | Pull keystone image ---------------------------------------------------- 41.29s 2025-03-27 00:43:50.179390 | orchestrator | 2025-03-27 00:43:50 | INFO  | Trying to run play wipe-partitions in environment custom 2025-03-27 00:43:50.230619 | orchestrator | 2025-03-27 00:43:50 | INFO  | Task 34c15104-10e1-4ee6-b8da-62f6c20c85af (wipe-partitions) was prepared for execution. 2025-03-27 00:43:53.539285 | orchestrator | 2025-03-27 00:43:50 | INFO  | It takes a moment until task 34c15104-10e1-4ee6-b8da-62f6c20c85af (wipe-partitions) has been started and output is visible here. 
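After the pull-images play, each Kolla service image listed above should be present in the local Docker image cache on testbed-manager. A quick spot-check is sketched below; the registry host and namespace are placeholders, since the log does not show the actual image references:

    # List pulled images and filter for a few of the services from the play.
    docker image ls --format '{{.Repository}}:{{.Tag}}' | grep -E 'keystone|nova|neutron'
    # Pulling a single service image by hand would follow the same shape, e.g.:
    docker pull registry.example.com/kolla/keystone:2024.1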
2025-03-27 00:43:53.539423 | orchestrator | 2025-03-27 00:43:53.541924 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-03-27 00:43:53.542359 | orchestrator | 2025-03-27 00:43:53.543303 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-03-27 00:43:53.543615 | orchestrator | Thursday 27 March 2025 00:43:53 +0000 (0:00:00.126) 0:00:00.126 ******** 2025-03-27 00:43:54.181728 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:43:54.182800 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:43:54.182843 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:43:54.183027 | orchestrator | 2025-03-27 00:43:54.183484 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-03-27 00:43:54.184721 | orchestrator | Thursday 27 March 2025 00:43:54 +0000 (0:00:00.641) 0:00:00.768 ******** 2025-03-27 00:43:54.378462 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:43:54.469901 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:43:54.471072 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:43:54.472069 | orchestrator | 2025-03-27 00:43:54.472801 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-03-27 00:43:54.477120 | orchestrator | Thursday 27 March 2025 00:43:54 +0000 (0:00:00.287) 0:00:01.056 ******** 2025-03-27 00:43:55.313143 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:43:55.313730 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:43:55.313772 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:43:55.315201 | orchestrator | 2025-03-27 00:43:55.315282 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-03-27 00:43:55.315932 | orchestrator | Thursday 27 March 2025 00:43:55 +0000 (0:00:00.840) 0:00:01.896 ******** 2025-03-27 00:43:55.501241 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:43:55.594112 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:43:55.594269 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:43:55.594293 | orchestrator | 2025-03-27 00:43:55.595461 | orchestrator | TASK [Check device availability] *********************************************** 2025-03-27 00:43:55.596639 | orchestrator | Thursday 27 March 2025 00:43:55 +0000 (0:00:00.284) 0:00:02.180 ******** 2025-03-27 00:43:56.861682 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-03-27 00:43:56.864681 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-03-27 00:43:56.869755 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-03-27 00:43:56.869781 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-03-27 00:43:56.869797 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-03-27 00:43:56.869810 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-03-27 00:43:56.869853 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-03-27 00:43:56.869871 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-03-27 00:43:56.869939 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-03-27 00:43:56.870608 | orchestrator | 2025-03-27 00:43:56.871035 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-03-27 00:43:56.871274 | orchestrator | Thursday 27 March 2025 00:43:56 +0000 (0:00:01.266) 0:00:03.447 ******** 2025-03-27 00:43:58.361552 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-03-27 00:43:58.362153 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-03-27 00:43:58.362647 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-03-27 00:43:58.363030 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-03-27 00:43:58.364569 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-03-27 00:43:58.365020 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-03-27 00:43:58.365194 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-03-27 00:43:58.365224 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-03-27 00:43:58.370992 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-03-27 00:44:00.881329 | orchestrator | 2025-03-27 00:44:00.881450 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-03-27 00:44:00.881469 | orchestrator | Thursday 27 March 2025 00:43:58 +0000 (0:00:01.496) 0:00:04.944 ******** 2025-03-27 00:44:00.881500 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-03-27 00:44:00.882368 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-03-27 00:44:00.882473 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-03-27 00:44:00.882851 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-03-27 00:44:00.885055 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-03-27 00:44:00.885405 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-03-27 00:44:00.885860 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-03-27 00:44:00.886213 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-03-27 00:44:00.886667 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-03-27 00:44:00.888038 | orchestrator | 2025-03-27 00:44:00.888348 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-03-27 00:44:00.891192 | orchestrator | Thursday 27 March 2025 00:44:00 +0000 (0:00:02.522) 0:00:07.466 ******** 2025-03-27 00:44:01.556368 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:44:01.556672 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:44:01.556777 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:44:01.557581 | orchestrator | 2025-03-27 00:44:01.557675 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-03-27 00:44:01.558132 | orchestrator | Thursday 27 March 2025 00:44:01 +0000 (0:00:00.677) 0:00:08.144 ******** 2025-03-27 00:44:02.269944 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:44:02.270203 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:44:02.270229 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:44:02.270248 | orchestrator | 2025-03-27 00:44:02.270589 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:44:02.271184 | orchestrator | 2025-03-27 00:44:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:44:02.274115 | orchestrator | 2025-03-27 00:44:02 | INFO  | Please wait and do not abort execution. 
2025-03-27 00:44:02.274147 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:44:02.274372 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:44:02.274727 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:44:02.275179 | orchestrator | 2025-03-27 00:44:02.280238 | orchestrator | Thursday 27 March 2025 00:44:02 +0000 (0:00:00.709) 0:00:08.853 ******** 2025-03-27 00:44:02.283476 | orchestrator | =============================================================================== 2025-03-27 00:44:02.283540 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.52s 2025-03-27 00:44:02.283963 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.50s 2025-03-27 00:44:02.287323 | orchestrator | Check device availability ----------------------------------------------- 1.27s 2025-03-27 00:44:02.287592 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.84s 2025-03-27 00:44:02.287619 | orchestrator | Request device events from the kernel ----------------------------------- 0.71s 2025-03-27 00:44:02.288298 | orchestrator | Reload udev rules ------------------------------------------------------- 0.68s 2025-03-27 00:44:02.290960 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.64s 2025-03-27 00:44:02.296421 | orchestrator | Remove all rook related logical devices --------------------------------- 0.29s 2025-03-27 00:44:02.301833 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2025-03-27 00:44:04.493500 | orchestrator | 2025-03-27 00:44:04 | INFO  | Task 16c5e69a-a8dc-4782-a5ea-312c525ceec6 (facts) was prepared for execution. 2025-03-27 00:44:04.495150 | orchestrator | 2025-03-27 00:44:04 | INFO  | It takes a moment until task 16c5e69a-a8dc-4782-a5ea-312c525ceec6 (facts) has been started and output is visible here. 
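The wipe-partitions play prepares the three Ceph data disks (`/dev/sdb`, `/dev/sdc`, `/dev/sdd`) on testbed-node-3/4/5: it looks for leftover rook- or ceph-owned logical devices (none found, so both removal tasks are skipped), confirms the raw devices are present, wipes filesystem and partition signatures, zeroes the first 32 MiB and finally asks udev to re-read the device state. A hedged sketch of the equivalent manual sequence, assuming the tasks wrap the standard tools their names suggest (wipefs, dd, udevadm), run as root on each storage node:

```bash
# Hedged sketch of the wipe sequence above; assumes the Ansible tasks wrap
# the standard tools their names suggest. Run as root on each storage node.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    test -b "${dev}"                              # "Check device availability"
    wipefs --all "${dev}"                         # "Wipe partitions with wipefs"
    dd if=/dev/zero of="${dev}" bs=1M count=32    # "Overwrite first 32M with zeros"
done
udevadm control --reload-rules                    # "Reload udev rules"
udevadm trigger                                   # "Request device events from the kernel"
```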
2025-03-27 00:44:08.582070 | orchestrator | 2025-03-27 00:44:08.584857 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-03-27 00:44:08.585657 | orchestrator | 2025-03-27 00:44:08.585685 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-27 00:44:08.585706 | orchestrator | Thursday 27 March 2025 00:44:08 +0000 (0:00:00.288) 0:00:00.288 ******** 2025-03-27 00:44:09.175607 | orchestrator | ok: [testbed-manager] 2025-03-27 00:44:09.815072 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:44:09.815323 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:44:09.816605 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:44:09.817702 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:44:09.818812 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:44:09.820493 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:44:09.821734 | orchestrator | 2025-03-27 00:44:09.823058 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-27 00:44:09.823924 | orchestrator | Thursday 27 March 2025 00:44:09 +0000 (0:00:01.230) 0:00:01.519 ******** 2025-03-27 00:44:09.985271 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:44:10.079427 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:44:10.206508 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:44:10.340010 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:44:10.428623 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:11.351459 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:11.353807 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:44:11.354949 | orchestrator | 2025-03-27 00:44:11.355749 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-27 00:44:11.355995 | orchestrator | 2025-03-27 00:44:11.356619 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-27 00:44:11.357545 | orchestrator | Thursday 27 March 2025 00:44:11 +0000 (0:00:01.534) 0:00:03.053 ******** 2025-03-27 00:44:16.453812 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:44:16.457247 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:44:16.458361 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:44:16.461019 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:44:16.461558 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:44:16.462393 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:44:16.463580 | orchestrator | ok: [testbed-manager] 2025-03-27 00:44:16.464472 | orchestrator | 2025-03-27 00:44:16.465033 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-27 00:44:16.465930 | orchestrator | 2025-03-27 00:44:16.466670 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-27 00:44:16.467752 | orchestrator | Thursday 27 March 2025 00:44:16 +0000 (0:00:05.111) 0:00:08.164 ******** 2025-03-27 00:44:16.733330 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:44:16.829504 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:44:16.931120 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:44:17.032151 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:44:17.120798 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:17.162072 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:17.163353 | orchestrator | skipping: 
[testbed-node-5] 2025-03-27 00:44:17.165520 | orchestrator | 2025-03-27 00:44:17.166681 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:44:17.166718 | orchestrator | 2025-03-27 00:44:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:44:17.168194 | orchestrator | 2025-03-27 00:44:17 | INFO  | Please wait and do not abort execution. 2025-03-27 00:44:17.168237 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:44:17.168978 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:44:17.169432 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:44:17.170505 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:44:17.171078 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:44:17.171108 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:44:17.171587 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:44:17.172532 | orchestrator | 2025-03-27 00:44:17.172790 | orchestrator | Thursday 27 March 2025 00:44:17 +0000 (0:00:00.706) 0:00:08.871 ******** 2025-03-27 00:44:17.173445 | orchestrator | =============================================================================== 2025-03-27 00:44:17.173862 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.11s 2025-03-27 00:44:17.174106 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.53s 2025-03-27 00:44:17.174681 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.23s 2025-03-27 00:44:17.175114 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.71s 2025-03-27 00:44:18.951416 | orchestrator | 2025-03-27 00:44:18 | INFO  | Task 28e5eaa0-5720-46e5-8fd3-2718355fadeb (ceph-configure-lvm-volumes) was prepared for execution. 2025-03-27 00:44:22.784025 | orchestrator | 2025-03-27 00:44:18 | INFO  | It takes a moment until task 28e5eaa0-5720-46e5-8fd3-2718355fadeb (ceph-configure-lvm-volumes) has been started and output is visible here. 
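After the facts refresh, the ceph-configure-lvm-volumes play inventories the block devices on the storage nodes (testbed-node-3/4/5), assigns a stable UUID to each OSD candidate disk and writes the resulting `ceph_osd_devices`/`lvm_volumes` configuration back to the manager, as printed further below for `sdb` and `sdc`. A hedged sketch of how one generated entry maps onto LVM objects, assuming the `ceph-<uuid>` volume-group / `osd-block-<uuid>` logical-volume convention visible in that output; the play itself only writes configuration, and the volumes are presumably created later in the deployment by ceph-volume via the ceph-ansible stack selected earlier (`CEPH_STACK=ceph-ansible`):

```bash
# Hedged sketch: materializing one lvm_volumes entry by hand. UUID and
# device are taken from the testbed-node-3 output below; in the actual
# deployment this step is presumably handled by ceph-volume, not this play.
uuid="5e2bf155-ac50-562d-a3fc-a4d9038fe730"                   # osd_lvm_uuid for sdb
dev="/dev/sdb"

pvcreate "${dev}"
vgcreate "ceph-${uuid}" "${dev}"                              # data_vg: ceph-<uuid>
lvcreate -n "osd-block-${uuid}" -l 100%FREE "ceph-${uuid}"    # data: osd-block-<uuid>
```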
2025-03-27 00:44:22.784159 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-27 00:44:23.371690 | orchestrator | 2025-03-27 00:44:23.374245 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-27 00:44:23.375625 | orchestrator | 2025-03-27 00:44:23.649120 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-27 00:44:23.649211 | orchestrator | Thursday 27 March 2025 00:44:23 +0000 (0:00:00.503) 0:00:00.503 ******** 2025-03-27 00:44:23.649237 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-27 00:44:23.650380 | orchestrator | 2025-03-27 00:44:23.653729 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-27 00:44:23.654420 | orchestrator | Thursday 27 March 2025 00:44:23 +0000 (0:00:00.284) 0:00:00.788 ******** 2025-03-27 00:44:23.907961 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:44:23.912135 | orchestrator | 2025-03-27 00:44:23.913684 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:23.913897 | orchestrator | Thursday 27 March 2025 00:44:23 +0000 (0:00:00.258) 0:00:01.046 ******** 2025-03-27 00:44:24.445857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-03-27 00:44:24.446497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-03-27 00:44:24.447392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-03-27 00:44:24.448859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-03-27 00:44:24.450653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-03-27 00:44:24.451826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-03-27 00:44:24.452958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-03-27 00:44:24.454461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-03-27 00:44:24.455564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-03-27 00:44:24.456671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-03-27 00:44:24.457544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-03-27 00:44:24.457639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-03-27 00:44:24.458322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-03-27 00:44:24.458784 | orchestrator | 2025-03-27 00:44:24.459387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:24.460326 | orchestrator | Thursday 27 March 2025 00:44:24 +0000 (0:00:00.540) 0:00:01.586 ******** 2025-03-27 00:44:24.657723 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:24.659317 | orchestrator | 2025-03-27 00:44:24.660345 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:24.661115 | orchestrator | Thursday 27 March 2025 00:44:24 +0000 
(0:00:00.210) 0:00:01.797 ******** 2025-03-27 00:44:24.877414 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:24.877617 | orchestrator | 2025-03-27 00:44:24.877646 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:24.878922 | orchestrator | Thursday 27 March 2025 00:44:24 +0000 (0:00:00.217) 0:00:02.015 ******** 2025-03-27 00:44:25.112603 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:25.113191 | orchestrator | 2025-03-27 00:44:25.113229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:25.113438 | orchestrator | Thursday 27 March 2025 00:44:25 +0000 (0:00:00.236) 0:00:02.252 ******** 2025-03-27 00:44:25.375541 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:25.375872 | orchestrator | 2025-03-27 00:44:25.376299 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:25.377400 | orchestrator | Thursday 27 March 2025 00:44:25 +0000 (0:00:00.262) 0:00:02.514 ******** 2025-03-27 00:44:25.589366 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:25.793998 | orchestrator | 2025-03-27 00:44:25.794099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:25.794116 | orchestrator | Thursday 27 March 2025 00:44:25 +0000 (0:00:00.212) 0:00:02.726 ******** 2025-03-27 00:44:25.794140 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:25.794577 | orchestrator | 2025-03-27 00:44:25.797316 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:25.799642 | orchestrator | Thursday 27 March 2025 00:44:25 +0000 (0:00:00.208) 0:00:02.934 ******** 2025-03-27 00:44:26.042112 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:26.042577 | orchestrator | 2025-03-27 00:44:26.044152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:26.044218 | orchestrator | Thursday 27 March 2025 00:44:26 +0000 (0:00:00.244) 0:00:03.179 ******** 2025-03-27 00:44:26.297131 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:26.297797 | orchestrator | 2025-03-27 00:44:26.297833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:26.300886 | orchestrator | Thursday 27 March 2025 00:44:26 +0000 (0:00:00.258) 0:00:03.437 ******** 2025-03-27 00:44:27.037652 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21) 2025-03-27 00:44:27.038011 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21) 2025-03-27 00:44:27.038332 | orchestrator | 2025-03-27 00:44:27.038575 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:27.038989 | orchestrator | Thursday 27 March 2025 00:44:27 +0000 (0:00:00.736) 0:00:04.174 ******** 2025-03-27 00:44:28.063691 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3a3b00e3-da7a-4c3b-8b0c-ab011795b6c9) 2025-03-27 00:44:28.063842 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3a3b00e3-da7a-4c3b-8b0c-ab011795b6c9) 2025-03-27 00:44:28.065315 | orchestrator | 2025-03-27 00:44:28.066243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 
00:44:28.069220 | orchestrator | Thursday 27 March 2025 00:44:28 +0000 (0:00:01.027) 0:00:05.202 ******** 2025-03-27 00:44:28.558253 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1a89a9ff-44c1-4404-a46c-604e790c64d7) 2025-03-27 00:44:28.560764 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1a89a9ff-44c1-4404-a46c-604e790c64d7) 2025-03-27 00:44:28.561005 | orchestrator | 2025-03-27 00:44:28.561099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:28.561411 | orchestrator | Thursday 27 March 2025 00:44:28 +0000 (0:00:00.496) 0:00:05.698 ******** 2025-03-27 00:44:29.131332 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_874d53e3-fb17-4b5b-8e0b-b33da9e1cc23) 2025-03-27 00:44:29.134587 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_874d53e3-fb17-4b5b-8e0b-b33da9e1cc23) 2025-03-27 00:44:29.144048 | orchestrator | 2025-03-27 00:44:29.502616 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:29.502727 | orchestrator | Thursday 27 March 2025 00:44:29 +0000 (0:00:00.565) 0:00:06.264 ******** 2025-03-27 00:44:29.502759 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-27 00:44:29.503299 | orchestrator | 2025-03-27 00:44:29.503335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:29.503619 | orchestrator | Thursday 27 March 2025 00:44:29 +0000 (0:00:00.379) 0:00:06.643 ******** 2025-03-27 00:44:30.120417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-03-27 00:44:30.120665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-03-27 00:44:30.120709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-03-27 00:44:30.120734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-03-27 00:44:30.120753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-03-27 00:44:30.121992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-03-27 00:44:30.122122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-03-27 00:44:30.122217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-03-27 00:44:30.122451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-03-27 00:44:30.122557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-03-27 00:44:30.125269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-03-27 00:44:30.127962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-03-27 00:44:30.128062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-03-27 00:44:30.128325 | orchestrator | 2025-03-27 00:44:30.128526 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:30.129343 | orchestrator | Thursday 27 March 2025 00:44:30 
+0000 (0:00:00.615) 0:00:07.258 ******** 2025-03-27 00:44:30.355856 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:30.356004 | orchestrator | 2025-03-27 00:44:30.356336 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:30.356366 | orchestrator | Thursday 27 March 2025 00:44:30 +0000 (0:00:00.237) 0:00:07.495 ******** 2025-03-27 00:44:30.571556 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:30.572638 | orchestrator | 2025-03-27 00:44:30.573908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:30.574228 | orchestrator | Thursday 27 March 2025 00:44:30 +0000 (0:00:00.216) 0:00:07.712 ******** 2025-03-27 00:44:30.781475 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:30.781914 | orchestrator | 2025-03-27 00:44:30.782266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:30.783471 | orchestrator | Thursday 27 March 2025 00:44:30 +0000 (0:00:00.211) 0:00:07.923 ******** 2025-03-27 00:44:31.008595 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:31.008717 | orchestrator | 2025-03-27 00:44:31.008736 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:31.008757 | orchestrator | Thursday 27 March 2025 00:44:31 +0000 (0:00:00.224) 0:00:08.147 ******** 2025-03-27 00:44:31.713806 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:31.715328 | orchestrator | 2025-03-27 00:44:31.715767 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:31.717797 | orchestrator | Thursday 27 March 2025 00:44:31 +0000 (0:00:00.707) 0:00:08.855 ******** 2025-03-27 00:44:31.964114 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:31.969496 | orchestrator | 2025-03-27 00:44:32.275905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:32.275975 | orchestrator | Thursday 27 March 2025 00:44:31 +0000 (0:00:00.247) 0:00:09.102 ******** 2025-03-27 00:44:32.276001 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:32.279637 | orchestrator | 2025-03-27 00:44:32.533388 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:32.533434 | orchestrator | Thursday 27 March 2025 00:44:32 +0000 (0:00:00.311) 0:00:09.414 ******** 2025-03-27 00:44:32.533454 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:32.534489 | orchestrator | 2025-03-27 00:44:32.535754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:32.537039 | orchestrator | Thursday 27 March 2025 00:44:32 +0000 (0:00:00.255) 0:00:09.670 ******** 2025-03-27 00:44:33.362091 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-03-27 00:44:33.363207 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-03-27 00:44:33.364706 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-03-27 00:44:33.365851 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-03-27 00:44:33.367288 | orchestrator | 2025-03-27 00:44:33.368642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:33.369560 | orchestrator | Thursday 27 March 2025 00:44:33 +0000 (0:00:00.829) 0:00:10.499 ******** 2025-03-27 00:44:33.568586 | orchestrator | 
skipping: [testbed-node-3] 2025-03-27 00:44:33.572019 | orchestrator | 2025-03-27 00:44:33.573145 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:33.808315 | orchestrator | Thursday 27 March 2025 00:44:33 +0000 (0:00:00.205) 0:00:10.705 ******** 2025-03-27 00:44:33.808363 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:33.810008 | orchestrator | 2025-03-27 00:44:33.810083 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:33.812388 | orchestrator | Thursday 27 March 2025 00:44:33 +0000 (0:00:00.241) 0:00:10.946 ******** 2025-03-27 00:44:34.026642 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:34.028453 | orchestrator | 2025-03-27 00:44:34.029592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:34.029951 | orchestrator | Thursday 27 March 2025 00:44:34 +0000 (0:00:00.217) 0:00:11.164 ******** 2025-03-27 00:44:34.273227 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:34.275374 | orchestrator | 2025-03-27 00:44:34.276391 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-27 00:44:34.277376 | orchestrator | Thursday 27 March 2025 00:44:34 +0000 (0:00:00.248) 0:00:11.412 ******** 2025-03-27 00:44:34.481483 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-03-27 00:44:34.490131 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-03-27 00:44:34.637388 | orchestrator | 2025-03-27 00:44:34.637421 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-27 00:44:34.637436 | orchestrator | Thursday 27 March 2025 00:44:34 +0000 (0:00:00.209) 0:00:11.622 ******** 2025-03-27 00:44:34.637456 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:34.637935 | orchestrator | 2025-03-27 00:44:34.640807 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-27 00:44:34.641001 | orchestrator | Thursday 27 March 2025 00:44:34 +0000 (0:00:00.156) 0:00:11.779 ******** 2025-03-27 00:44:35.019701 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:35.022761 | orchestrator | 2025-03-27 00:44:35.023511 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-27 00:44:35.023744 | orchestrator | Thursday 27 March 2025 00:44:35 +0000 (0:00:00.380) 0:00:12.159 ******** 2025-03-27 00:44:35.185407 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:35.186560 | orchestrator | 2025-03-27 00:44:35.187403 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-27 00:44:35.188050 | orchestrator | Thursday 27 March 2025 00:44:35 +0000 (0:00:00.167) 0:00:12.327 ******** 2025-03-27 00:44:35.340732 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:44:35.340863 | orchestrator | 2025-03-27 00:44:35.343356 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-27 00:44:35.343504 | orchestrator | Thursday 27 March 2025 00:44:35 +0000 (0:00:00.153) 0:00:12.480 ******** 2025-03-27 00:44:35.589151 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5e2bf155-ac50-562d-a3fc-a4d9038fe730'}}) 2025-03-27 00:44:35.591832 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': 'd321ea45-1a00-5698-8092-45c793cb3b8c'}}) 2025-03-27 00:44:35.592871 | orchestrator | 2025-03-27 00:44:35.592897 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-27 00:44:35.592918 | orchestrator | Thursday 27 March 2025 00:44:35 +0000 (0:00:00.245) 0:00:12.725 ******** 2025-03-27 00:44:35.939522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5e2bf155-ac50-562d-a3fc-a4d9038fe730'}})  2025-03-27 00:44:35.941046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd321ea45-1a00-5698-8092-45c793cb3b8c'}})  2025-03-27 00:44:35.941972 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:35.941999 | orchestrator | 2025-03-27 00:44:35.942057 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-27 00:44:35.942094 | orchestrator | Thursday 27 March 2025 00:44:35 +0000 (0:00:00.349) 0:00:13.075 ******** 2025-03-27 00:44:36.315749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5e2bf155-ac50-562d-a3fc-a4d9038fe730'}})  2025-03-27 00:44:36.318371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd321ea45-1a00-5698-8092-45c793cb3b8c'}})  2025-03-27 00:44:36.318479 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:36.320288 | orchestrator | 2025-03-27 00:44:36.321154 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-27 00:44:36.327041 | orchestrator | Thursday 27 March 2025 00:44:36 +0000 (0:00:00.381) 0:00:13.456 ******** 2025-03-27 00:44:36.628525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5e2bf155-ac50-562d-a3fc-a4d9038fe730'}})  2025-03-27 00:44:36.629398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd321ea45-1a00-5698-8092-45c793cb3b8c'}})  2025-03-27 00:44:36.629753 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:36.630295 | orchestrator | 2025-03-27 00:44:36.631898 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-27 00:44:36.633065 | orchestrator | Thursday 27 March 2025 00:44:36 +0000 (0:00:00.308) 0:00:13.765 ******** 2025-03-27 00:44:36.847148 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:44:36.850482 | orchestrator | 2025-03-27 00:44:36.851202 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-27 00:44:36.851634 | orchestrator | Thursday 27 March 2025 00:44:36 +0000 (0:00:00.222) 0:00:13.987 ******** 2025-03-27 00:44:37.058930 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:44:37.059359 | orchestrator | 2025-03-27 00:44:37.060215 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-27 00:44:37.060542 | orchestrator | Thursday 27 March 2025 00:44:37 +0000 (0:00:00.210) 0:00:14.198 ******** 2025-03-27 00:44:37.230216 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:37.232311 | orchestrator | 2025-03-27 00:44:37.233497 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-27 00:44:37.235082 | orchestrator | Thursday 27 March 2025 00:44:37 +0000 (0:00:00.171) 0:00:14.370 ******** 2025-03-27 00:44:37.373918 | orchestrator | skipping: [testbed-node-3] 2025-03-27 
00:44:37.374487 | orchestrator | 2025-03-27 00:44:37.375839 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-27 00:44:37.379144 | orchestrator | Thursday 27 March 2025 00:44:37 +0000 (0:00:00.141) 0:00:14.511 ******** 2025-03-27 00:44:37.822926 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:37.826959 | orchestrator | 2025-03-27 00:44:37.830121 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-27 00:44:38.025289 | orchestrator | Thursday 27 March 2025 00:44:37 +0000 (0:00:00.452) 0:00:14.963 ******** 2025-03-27 00:44:38.025394 | orchestrator | ok: [testbed-node-3] => { 2025-03-27 00:44:38.025467 | orchestrator |  "ceph_osd_devices": { 2025-03-27 00:44:38.026444 | orchestrator |  "sdb": { 2025-03-27 00:44:38.027676 | orchestrator |  "osd_lvm_uuid": "5e2bf155-ac50-562d-a3fc-a4d9038fe730" 2025-03-27 00:44:38.028629 | orchestrator |  }, 2025-03-27 00:44:38.029600 | orchestrator |  "sdc": { 2025-03-27 00:44:38.030310 | orchestrator |  "osd_lvm_uuid": "d321ea45-1a00-5698-8092-45c793cb3b8c" 2025-03-27 00:44:38.031156 | orchestrator |  } 2025-03-27 00:44:38.031713 | orchestrator |  } 2025-03-27 00:44:38.032482 | orchestrator | } 2025-03-27 00:44:38.033657 | orchestrator | 2025-03-27 00:44:38.035133 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-27 00:44:38.036442 | orchestrator | Thursday 27 March 2025 00:44:38 +0000 (0:00:00.200) 0:00:15.164 ******** 2025-03-27 00:44:38.179555 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:38.179737 | orchestrator | 2025-03-27 00:44:38.183623 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-27 00:44:38.339786 | orchestrator | Thursday 27 March 2025 00:44:38 +0000 (0:00:00.153) 0:00:15.317 ******** 2025-03-27 00:44:38.339859 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:38.341067 | orchestrator | 2025-03-27 00:44:38.341404 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-27 00:44:38.341434 | orchestrator | Thursday 27 March 2025 00:44:38 +0000 (0:00:00.160) 0:00:15.478 ******** 2025-03-27 00:44:38.492726 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:44:38.493900 | orchestrator | 2025-03-27 00:44:38.497110 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-27 00:44:38.499672 | orchestrator | Thursday 27 March 2025 00:44:38 +0000 (0:00:00.151) 0:00:15.629 ******** 2025-03-27 00:44:38.802257 | orchestrator | changed: [testbed-node-3] => { 2025-03-27 00:44:38.803121 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-27 00:44:38.805751 | orchestrator |  "ceph_osd_devices": { 2025-03-27 00:44:38.807381 | orchestrator |  "sdb": { 2025-03-27 00:44:38.808080 | orchestrator |  "osd_lvm_uuid": "5e2bf155-ac50-562d-a3fc-a4d9038fe730" 2025-03-27 00:44:38.810249 | orchestrator |  }, 2025-03-27 00:44:38.811364 | orchestrator |  "sdc": { 2025-03-27 00:44:38.811398 | orchestrator |  "osd_lvm_uuid": "d321ea45-1a00-5698-8092-45c793cb3b8c" 2025-03-27 00:44:38.813037 | orchestrator |  } 2025-03-27 00:44:38.814308 | orchestrator |  }, 2025-03-27 00:44:38.814771 | orchestrator |  "lvm_volumes": [ 2025-03-27 00:44:38.816141 | orchestrator |  { 2025-03-27 00:44:38.817106 | orchestrator |  "data": "osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730", 2025-03-27 00:44:38.818150 | orchestrator |  
"data_vg": "ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730" 2025-03-27 00:44:38.818867 | orchestrator |  }, 2025-03-27 00:44:38.819394 | orchestrator |  { 2025-03-27 00:44:38.819986 | orchestrator |  "data": "osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c", 2025-03-27 00:44:38.821262 | orchestrator |  "data_vg": "ceph-d321ea45-1a00-5698-8092-45c793cb3b8c" 2025-03-27 00:44:38.821351 | orchestrator |  } 2025-03-27 00:44:38.822245 | orchestrator |  ] 2025-03-27 00:44:38.823114 | orchestrator |  } 2025-03-27 00:44:38.823979 | orchestrator | } 2025-03-27 00:44:38.825223 | orchestrator | 2025-03-27 00:44:38.826618 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-27 00:44:38.828368 | orchestrator | Thursday 27 March 2025 00:44:38 +0000 (0:00:00.311) 0:00:15.941 ******** 2025-03-27 00:44:41.073786 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-27 00:44:41.074119 | orchestrator | 2025-03-27 00:44:41.077281 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-27 00:44:41.077372 | orchestrator | 2025-03-27 00:44:41.077394 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-27 00:44:41.078872 | orchestrator | Thursday 27 March 2025 00:44:41 +0000 (0:00:02.269) 0:00:18.211 ******** 2025-03-27 00:44:41.338076 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-27 00:44:41.342455 | orchestrator | 2025-03-27 00:44:41.343544 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-27 00:44:41.344828 | orchestrator | Thursday 27 March 2025 00:44:41 +0000 (0:00:00.263) 0:00:18.474 ******** 2025-03-27 00:44:41.589877 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:44:41.591155 | orchestrator | 2025-03-27 00:44:41.592458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:41.593587 | orchestrator | Thursday 27 March 2025 00:44:41 +0000 (0:00:00.255) 0:00:18.729 ******** 2025-03-27 00:44:41.992348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-03-27 00:44:41.994300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-03-27 00:44:42.000628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-03-27 00:44:42.001900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-03-27 00:44:42.003275 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-03-27 00:44:42.005402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-03-27 00:44:42.008500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-03-27 00:44:42.009970 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-03-27 00:44:42.009995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-03-27 00:44:42.010014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-03-27 00:44:42.010605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-03-27 00:44:42.012298 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-03-27 00:44:42.013235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-03-27 00:44:42.013890 | orchestrator | 2025-03-27 00:44:42.014463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:42.015309 | orchestrator | Thursday 27 March 2025 00:44:41 +0000 (0:00:00.402) 0:00:19.132 ******** 2025-03-27 00:44:42.247514 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:42.247741 | orchestrator | 2025-03-27 00:44:42.247768 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:42.247790 | orchestrator | Thursday 27 March 2025 00:44:42 +0000 (0:00:00.253) 0:00:19.385 ******** 2025-03-27 00:44:42.458201 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:42.459535 | orchestrator | 2025-03-27 00:44:42.461072 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:42.461630 | orchestrator | Thursday 27 March 2025 00:44:42 +0000 (0:00:00.212) 0:00:19.597 ******** 2025-03-27 00:44:42.680080 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:42.680849 | orchestrator | 2025-03-27 00:44:42.681089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:42.684275 | orchestrator | Thursday 27 March 2025 00:44:42 +0000 (0:00:00.218) 0:00:19.815 ******** 2025-03-27 00:44:43.416787 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:43.418359 | orchestrator | 2025-03-27 00:44:43.418409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:43.419630 | orchestrator | Thursday 27 March 2025 00:44:43 +0000 (0:00:00.740) 0:00:20.556 ******** 2025-03-27 00:44:43.630339 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:43.630763 | orchestrator | 2025-03-27 00:44:43.633113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:43.844330 | orchestrator | Thursday 27 March 2025 00:44:43 +0000 (0:00:00.213) 0:00:20.770 ******** 2025-03-27 00:44:43.844393 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:43.848223 | orchestrator | 2025-03-27 00:44:43.849031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:43.851568 | orchestrator | Thursday 27 March 2025 00:44:43 +0000 (0:00:00.213) 0:00:20.983 ******** 2025-03-27 00:44:44.076644 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:44.078800 | orchestrator | 2025-03-27 00:44:44.080677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:44.081824 | orchestrator | Thursday 27 March 2025 00:44:44 +0000 (0:00:00.229) 0:00:21.212 ******** 2025-03-27 00:44:44.307872 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:44.311339 | orchestrator | 2025-03-27 00:44:44.312136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:44.312201 | orchestrator | Thursday 27 March 2025 00:44:44 +0000 (0:00:00.232) 0:00:21.445 ******** 2025-03-27 00:44:44.789062 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666) 2025-03-27 00:44:44.790654 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666) 2025-03-27 00:44:44.791212 | orchestrator | 2025-03-27 00:44:44.791545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:44.791964 | orchestrator | Thursday 27 March 2025 00:44:44 +0000 (0:00:00.484) 0:00:21.929 ******** 2025-03-27 00:44:45.340558 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3b62db4a-d9c9-4dee-909c-fb2dda9345a8) 2025-03-27 00:44:45.341364 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3b62db4a-d9c9-4dee-909c-fb2dda9345a8) 2025-03-27 00:44:45.345793 | orchestrator | 2025-03-27 00:44:45.346337 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:45.346751 | orchestrator | Thursday 27 March 2025 00:44:45 +0000 (0:00:00.551) 0:00:22.480 ******** 2025-03-27 00:44:45.858579 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5498cf3d-971d-4d04-a26e-caa954b0ff0a) 2025-03-27 00:44:45.860811 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5498cf3d-971d-4d04-a26e-caa954b0ff0a) 2025-03-27 00:44:45.862927 | orchestrator | 2025-03-27 00:44:45.863341 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:45.866456 | orchestrator | Thursday 27 March 2025 00:44:45 +0000 (0:00:00.518) 0:00:22.998 ******** 2025-03-27 00:44:46.564372 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a8735590-8c0d-455a-9e36-1ed693cbdd10) 2025-03-27 00:44:46.565924 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a8735590-8c0d-455a-9e36-1ed693cbdd10) 2025-03-27 00:44:46.569365 | orchestrator | 2025-03-27 00:44:46.569402 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:46.569460 | orchestrator | Thursday 27 March 2025 00:44:46 +0000 (0:00:00.701) 0:00:23.700 ******** 2025-03-27 00:44:47.458904 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-27 00:44:47.459989 | orchestrator | 2025-03-27 00:44:47.460532 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:47.461851 | orchestrator | Thursday 27 March 2025 00:44:47 +0000 (0:00:00.892) 0:00:24.593 ******** 2025-03-27 00:44:47.934574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-03-27 00:44:47.937514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-03-27 00:44:47.940112 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-03-27 00:44:47.940984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-03-27 00:44:47.942074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-03-27 00:44:47.943547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-03-27 00:44:47.944752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-03-27 00:44:47.945859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-03-27 00:44:47.946527 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-03-27 00:44:47.947890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-03-27 00:44:47.948583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-03-27 00:44:47.949506 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-03-27 00:44:47.950138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-03-27 00:44:47.950960 | orchestrator | 2025-03-27 00:44:47.951593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:47.952328 | orchestrator | Thursday 27 March 2025 00:44:47 +0000 (0:00:00.480) 0:00:25.074 ******** 2025-03-27 00:44:48.174491 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:48.175956 | orchestrator | 2025-03-27 00:44:48.177071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:48.177460 | orchestrator | Thursday 27 March 2025 00:44:48 +0000 (0:00:00.239) 0:00:25.314 ******** 2025-03-27 00:44:48.412822 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:48.413125 | orchestrator | 2025-03-27 00:44:48.414115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:48.414517 | orchestrator | Thursday 27 March 2025 00:44:48 +0000 (0:00:00.235) 0:00:25.550 ******** 2025-03-27 00:44:48.643805 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:48.647072 | orchestrator | 2025-03-27 00:44:48.648050 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:48.648082 | orchestrator | Thursday 27 March 2025 00:44:48 +0000 (0:00:00.231) 0:00:25.781 ******** 2025-03-27 00:44:48.851391 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:48.851903 | orchestrator | 2025-03-27 00:44:48.852629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:48.853485 | orchestrator | Thursday 27 March 2025 00:44:48 +0000 (0:00:00.210) 0:00:25.991 ******** 2025-03-27 00:44:49.053456 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:49.054096 | orchestrator | 2025-03-27 00:44:49.055106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:49.056002 | orchestrator | Thursday 27 March 2025 00:44:49 +0000 (0:00:00.200) 0:00:26.192 ******** 2025-03-27 00:44:49.278314 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:49.278874 | orchestrator | 2025-03-27 00:44:49.279215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:49.280245 | orchestrator | Thursday 27 March 2025 00:44:49 +0000 (0:00:00.225) 0:00:26.417 ******** 2025-03-27 00:44:49.523113 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:49.523522 | orchestrator | 2025-03-27 00:44:49.523571 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:49.525372 | orchestrator | Thursday 27 March 2025 00:44:49 +0000 (0:00:00.242) 0:00:26.660 ******** 2025-03-27 00:44:49.791218 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:49.792026 | orchestrator | 2025-03-27 00:44:49.792504 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-03-27 00:44:49.793408 | orchestrator | Thursday 27 March 2025 00:44:49 +0000 (0:00:00.268) 0:00:26.929 ******** 2025-03-27 00:44:50.978874 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-03-27 00:44:50.979351 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-03-27 00:44:50.979539 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-03-27 00:44:50.981987 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-03-27 00:44:51.178645 | orchestrator | 2025-03-27 00:44:51.178752 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:51.178768 | orchestrator | Thursday 27 March 2025 00:44:50 +0000 (0:00:01.188) 0:00:28.117 ******** 2025-03-27 00:44:51.178796 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:51.179067 | orchestrator | 2025-03-27 00:44:51.181948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:51.183482 | orchestrator | Thursday 27 March 2025 00:44:51 +0000 (0:00:00.201) 0:00:28.318 ******** 2025-03-27 00:44:51.409329 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:51.412081 | orchestrator | 2025-03-27 00:44:51.413472 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:51.413915 | orchestrator | Thursday 27 March 2025 00:44:51 +0000 (0:00:00.228) 0:00:28.546 ******** 2025-03-27 00:44:51.630852 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:51.633606 | orchestrator | 2025-03-27 00:44:51.634209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:44:51.635351 | orchestrator | Thursday 27 March 2025 00:44:51 +0000 (0:00:00.223) 0:00:28.770 ******** 2025-03-27 00:44:51.847373 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:51.847748 | orchestrator | 2025-03-27 00:44:51.847782 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-27 00:44:51.849117 | orchestrator | Thursday 27 March 2025 00:44:51 +0000 (0:00:00.217) 0:00:28.987 ******** 2025-03-27 00:44:52.061512 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-03-27 00:44:52.061943 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-03-27 00:44:52.062695 | orchestrator | 2025-03-27 00:44:52.063430 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-27 00:44:52.064647 | orchestrator | Thursday 27 March 2025 00:44:52 +0000 (0:00:00.214) 0:00:29.202 ******** 2025-03-27 00:44:52.233688 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:52.234857 | orchestrator | 2025-03-27 00:44:52.236107 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-27 00:44:52.236796 | orchestrator | Thursday 27 March 2025 00:44:52 +0000 (0:00:00.171) 0:00:29.373 ******** 2025-03-27 00:44:52.385530 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:52.385897 | orchestrator | 2025-03-27 00:44:52.386431 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-27 00:44:52.387034 | orchestrator | Thursday 27 March 2025 00:44:52 +0000 (0:00:00.151) 0:00:29.525 ******** 2025-03-27 00:44:52.531557 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:52.532311 | orchestrator | 2025-03-27 
00:44:52.533498 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-27 00:44:52.535141 | orchestrator | Thursday 27 March 2025 00:44:52 +0000 (0:00:00.146) 0:00:29.671 ******** 2025-03-27 00:44:52.681190 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:44:52.682643 | orchestrator | 2025-03-27 00:44:52.685428 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-27 00:44:52.881676 | orchestrator | Thursday 27 March 2025 00:44:52 +0000 (0:00:00.148) 0:00:29.820 ******** 2025-03-27 00:44:52.881749 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bac76156-9f65-5e37-8447-16c40269f5cf'}}) 2025-03-27 00:44:52.883331 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'}}) 2025-03-27 00:44:52.884831 | orchestrator | 2025-03-27 00:44:52.886660 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-27 00:44:53.440865 | orchestrator | Thursday 27 March 2025 00:44:52 +0000 (0:00:00.201) 0:00:30.022 ******** 2025-03-27 00:44:53.440979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bac76156-9f65-5e37-8447-16c40269f5cf'}})  2025-03-27 00:44:53.441295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'}})  2025-03-27 00:44:53.441325 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:53.441347 | orchestrator | 2025-03-27 00:44:53.442387 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-27 00:44:53.617350 | orchestrator | Thursday 27 March 2025 00:44:53 +0000 (0:00:00.556) 0:00:30.578 ******** 2025-03-27 00:44:53.617401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bac76156-9f65-5e37-8447-16c40269f5cf'}})  2025-03-27 00:44:53.618006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'}})  2025-03-27 00:44:53.618508 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:53.620423 | orchestrator | 2025-03-27 00:44:53.621045 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-27 00:44:53.622309 | orchestrator | Thursday 27 March 2025 00:44:53 +0000 (0:00:00.179) 0:00:30.758 ******** 2025-03-27 00:44:53.790782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bac76156-9f65-5e37-8447-16c40269f5cf'}})  2025-03-27 00:44:53.791247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'}})  2025-03-27 00:44:53.791855 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:53.792510 | orchestrator | 2025-03-27 00:44:53.792967 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-27 00:44:53.793978 | orchestrator | Thursday 27 March 2025 00:44:53 +0000 (0:00:00.172) 0:00:30.931 ******** 2025-03-27 00:44:53.947366 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:44:53.948482 | orchestrator | 2025-03-27 00:44:53.949060 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-27 00:44:53.950213 | orchestrator | Thursday 27 March 2025 00:44:53 +0000 
(0:00:00.156) 0:00:31.087 ******** 2025-03-27 00:44:54.107648 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:44:54.108183 | orchestrator | 2025-03-27 00:44:54.108392 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-27 00:44:54.109359 | orchestrator | Thursday 27 March 2025 00:44:54 +0000 (0:00:00.159) 0:00:31.247 ******** 2025-03-27 00:44:54.266778 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:54.268076 | orchestrator | 2025-03-27 00:44:54.269600 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-27 00:44:54.271962 | orchestrator | Thursday 27 March 2025 00:44:54 +0000 (0:00:00.157) 0:00:31.405 ******** 2025-03-27 00:44:54.425627 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:54.426150 | orchestrator | 2025-03-27 00:44:54.428759 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-27 00:44:54.573738 | orchestrator | Thursday 27 March 2025 00:44:54 +0000 (0:00:00.158) 0:00:31.564 ******** 2025-03-27 00:44:54.573811 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:54.577420 | orchestrator | 2025-03-27 00:44:54.578863 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-27 00:44:54.722369 | orchestrator | Thursday 27 March 2025 00:44:54 +0000 (0:00:00.148) 0:00:31.712 ******** 2025-03-27 00:44:54.722449 | orchestrator | ok: [testbed-node-4] => { 2025-03-27 00:44:54.723998 | orchestrator |  "ceph_osd_devices": { 2025-03-27 00:44:54.725386 | orchestrator |  "sdb": { 2025-03-27 00:44:54.728233 | orchestrator |  "osd_lvm_uuid": "bac76156-9f65-5e37-8447-16c40269f5cf" 2025-03-27 00:44:54.728857 | orchestrator |  }, 2025-03-27 00:44:54.729962 | orchestrator |  "sdc": { 2025-03-27 00:44:54.730138 | orchestrator |  "osd_lvm_uuid": "cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b" 2025-03-27 00:44:54.730453 | orchestrator |  } 2025-03-27 00:44:54.730785 | orchestrator |  } 2025-03-27 00:44:54.731287 | orchestrator | } 2025-03-27 00:44:54.731465 | orchestrator | 2025-03-27 00:44:54.732337 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-27 00:44:54.733156 | orchestrator | Thursday 27 March 2025 00:44:54 +0000 (0:00:00.149) 0:00:31.862 ******** 2025-03-27 00:44:54.860418 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:54.861093 | orchestrator | 2025-03-27 00:44:54.861702 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-27 00:44:54.861956 | orchestrator | Thursday 27 March 2025 00:44:54 +0000 (0:00:00.138) 0:00:32.001 ******** 2025-03-27 00:44:55.018918 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:55.019462 | orchestrator | 2025-03-27 00:44:55.020762 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-27 00:44:55.021286 | orchestrator | Thursday 27 March 2025 00:44:55 +0000 (0:00:00.157) 0:00:32.159 ******** 2025-03-27 00:44:55.174120 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:44:55.174870 | orchestrator | 2025-03-27 00:44:55.175881 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-27 00:44:55.176206 | orchestrator | Thursday 27 March 2025 00:44:55 +0000 (0:00:00.154) 0:00:32.314 ******** 2025-03-27 00:44:55.702959 | orchestrator | changed: [testbed-node-4] => { 2025-03-27 00:44:55.704424 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-27 00:44:55.707907 | orchestrator |  "ceph_osd_devices": { 2025-03-27 00:44:55.710146 | orchestrator |  "sdb": { 2025-03-27 00:44:55.710270 | orchestrator |  "osd_lvm_uuid": "bac76156-9f65-5e37-8447-16c40269f5cf" 2025-03-27 00:44:55.710291 | orchestrator |  }, 2025-03-27 00:44:55.710311 | orchestrator |  "sdc": { 2025-03-27 00:44:55.711130 | orchestrator |  "osd_lvm_uuid": "cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b" 2025-03-27 00:44:55.711609 | orchestrator |  } 2025-03-27 00:44:55.712302 | orchestrator |  }, 2025-03-27 00:44:55.713016 | orchestrator |  "lvm_volumes": [ 2025-03-27 00:44:55.714250 | orchestrator |  { 2025-03-27 00:44:55.714777 | orchestrator |  "data": "osd-block-bac76156-9f65-5e37-8447-16c40269f5cf", 2025-03-27 00:44:55.715257 | orchestrator |  "data_vg": "ceph-bac76156-9f65-5e37-8447-16c40269f5cf" 2025-03-27 00:44:55.715590 | orchestrator |  }, 2025-03-27 00:44:55.716115 | orchestrator |  { 2025-03-27 00:44:55.716390 | orchestrator |  "data": "osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b", 2025-03-27 00:44:55.716826 | orchestrator |  "data_vg": "ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b" 2025-03-27 00:44:55.717479 | orchestrator |  } 2025-03-27 00:44:55.717855 | orchestrator |  ] 2025-03-27 00:44:55.718313 | orchestrator |  } 2025-03-27 00:44:55.719199 | orchestrator | } 2025-03-27 00:44:55.719442 | orchestrator | 2025-03-27 00:44:55.720091 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-27 00:44:55.720397 | orchestrator | Thursday 27 March 2025 00:44:55 +0000 (0:00:00.527) 0:00:32.842 ******** 2025-03-27 00:44:57.141926 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-27 00:44:57.143361 | orchestrator | 2025-03-27 00:44:57.144154 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-27 00:44:57.147957 | orchestrator | 2025-03-27 00:44:57.149080 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-27 00:44:57.150775 | orchestrator | Thursday 27 March 2025 00:44:57 +0000 (0:00:01.440) 0:00:34.282 ******** 2025-03-27 00:44:57.389483 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-27 00:44:57.390760 | orchestrator | 2025-03-27 00:44:57.392523 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-27 00:44:57.393879 | orchestrator | Thursday 27 March 2025 00:44:57 +0000 (0:00:00.246) 0:00:34.529 ******** 2025-03-27 00:44:58.043345 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:44:58.043952 | orchestrator | 2025-03-27 00:44:58.045660 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:58.046583 | orchestrator | Thursday 27 March 2025 00:44:58 +0000 (0:00:00.651) 0:00:35.181 ******** 2025-03-27 00:44:58.455261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-03-27 00:44:58.457566 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-03-27 00:44:58.458507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-03-27 00:44:58.458925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-03-27 00:44:58.459950 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-03-27 00:44:58.461424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-03-27 00:44:58.463225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-03-27 00:44:58.463534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-03-27 00:44:58.464411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-03-27 00:44:58.466280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-03-27 00:44:58.466827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-03-27 00:44:58.467982 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-03-27 00:44:58.468799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-03-27 00:44:58.469447 | orchestrator | 2025-03-27 00:44:58.470573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:58.471487 | orchestrator | Thursday 27 March 2025 00:44:58 +0000 (0:00:00.412) 0:00:35.593 ******** 2025-03-27 00:44:58.655906 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:44:58.657070 | orchestrator | 2025-03-27 00:44:58.658906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:58.659454 | orchestrator | Thursday 27 March 2025 00:44:58 +0000 (0:00:00.199) 0:00:35.793 ******** 2025-03-27 00:44:58.867688 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:44:58.868263 | orchestrator | 2025-03-27 00:44:58.869322 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:58.872544 | orchestrator | Thursday 27 March 2025 00:44:58 +0000 (0:00:00.214) 0:00:36.007 ******** 2025-03-27 00:44:59.094216 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:44:59.095473 | orchestrator | 2025-03-27 00:44:59.095933 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:59.096940 | orchestrator | Thursday 27 March 2025 00:44:59 +0000 (0:00:00.227) 0:00:36.235 ******** 2025-03-27 00:44:59.333113 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:44:59.333500 | orchestrator | 2025-03-27 00:44:59.334187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:59.335121 | orchestrator | Thursday 27 March 2025 00:44:59 +0000 (0:00:00.237) 0:00:36.473 ******** 2025-03-27 00:44:59.544830 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:44:59.545859 | orchestrator | 2025-03-27 00:44:59.546976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:59.548350 | orchestrator | Thursday 27 March 2025 00:44:59 +0000 (0:00:00.210) 0:00:36.683 ******** 2025-03-27 00:44:59.764311 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:44:59.765916 | orchestrator | 2025-03-27 00:44:59.766622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:44:59.767585 | orchestrator | Thursday 27 March 2025 00:44:59 +0000 (0:00:00.221) 0:00:36.904 ******** 2025-03-27 00:44:59.975004 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:00.185960 
| orchestrator | 2025-03-27 00:45:00.186072 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:45:00.186081 | orchestrator | Thursday 27 March 2025 00:44:59 +0000 (0:00:00.208) 0:00:37.112 ******** 2025-03-27 00:45:00.186097 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:00.187013 | orchestrator | 2025-03-27 00:45:00.187886 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:45:00.188595 | orchestrator | Thursday 27 March 2025 00:45:00 +0000 (0:00:00.209) 0:00:37.322 ******** 2025-03-27 00:45:00.871745 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807) 2025-03-27 00:45:00.872731 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807) 2025-03-27 00:45:00.872925 | orchestrator | 2025-03-27 00:45:00.873958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:45:00.877472 | orchestrator | Thursday 27 March 2025 00:45:00 +0000 (0:00:00.686) 0:00:38.008 ******** 2025-03-27 00:45:01.345023 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a6b08226-ae04-4ebb-8f92-51d42c32f5ac) 2025-03-27 00:45:01.347067 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a6b08226-ae04-4ebb-8f92-51d42c32f5ac) 2025-03-27 00:45:01.348242 | orchestrator | 2025-03-27 00:45:01.351060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:45:01.352012 | orchestrator | Thursday 27 March 2025 00:45:01 +0000 (0:00:00.476) 0:00:38.485 ******** 2025-03-27 00:45:01.832516 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3ba6755c-983a-4f3d-8d53-7abda8c22d5d) 2025-03-27 00:45:01.833393 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3ba6755c-983a-4f3d-8d53-7abda8c22d5d) 2025-03-27 00:45:01.834127 | orchestrator | 2025-03-27 00:45:01.836638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:45:02.317085 | orchestrator | Thursday 27 March 2025 00:45:01 +0000 (0:00:00.486) 0:00:38.971 ******** 2025-03-27 00:45:02.317234 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0b86602b-3b4a-4669-b84e-8d0be08a4eb8) 2025-03-27 00:45:02.320041 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0b86602b-3b4a-4669-b84e-8d0be08a4eb8) 2025-03-27 00:45:02.673095 | orchestrator | 2025-03-27 00:45:02.673257 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:45:02.673277 | orchestrator | Thursday 27 March 2025 00:45:02 +0000 (0:00:00.485) 0:00:39.457 ******** 2025-03-27 00:45:02.673309 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-27 00:45:02.676242 | orchestrator | 2025-03-27 00:45:02.677015 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:02.677723 | orchestrator | Thursday 27 March 2025 00:45:02 +0000 (0:00:00.354) 0:00:39.811 ******** 2025-03-27 00:45:03.102139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-03-27 00:45:03.103241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-03-27 00:45:03.105433 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-03-27 00:45:03.106447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-03-27 00:45:03.107692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-03-27 00:45:03.109072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-03-27 00:45:03.109943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-03-27 00:45:03.110711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-03-27 00:45:03.111322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-03-27 00:45:03.111764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-03-27 00:45:03.112569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-03-27 00:45:03.112638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-03-27 00:45:03.113140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-03-27 00:45:03.113525 | orchestrator | 2025-03-27 00:45:03.114082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:03.114244 | orchestrator | Thursday 27 March 2025 00:45:03 +0000 (0:00:00.430) 0:00:40.241 ******** 2025-03-27 00:45:03.314278 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:03.315038 | orchestrator | 2025-03-27 00:45:03.316903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:03.318773 | orchestrator | Thursday 27 March 2025 00:45:03 +0000 (0:00:00.212) 0:00:40.454 ******** 2025-03-27 00:45:03.547010 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:03.548401 | orchestrator | 2025-03-27 00:45:03.551209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:03.551667 | orchestrator | Thursday 27 March 2025 00:45:03 +0000 (0:00:00.232) 0:00:40.686 ******** 2025-03-27 00:45:03.771669 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:03.773025 | orchestrator | 2025-03-27 00:45:03.774143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:03.776876 | orchestrator | Thursday 27 March 2025 00:45:03 +0000 (0:00:00.225) 0:00:40.912 ******** 2025-03-27 00:45:04.414314 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:04.416544 | orchestrator | 2025-03-27 00:45:04.417288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:04.418443 | orchestrator | Thursday 27 March 2025 00:45:04 +0000 (0:00:00.642) 0:00:41.555 ******** 2025-03-27 00:45:04.660355 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:04.660980 | orchestrator | 2025-03-27 00:45:04.661949 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:04.663078 | orchestrator | Thursday 27 March 2025 00:45:04 +0000 (0:00:00.242) 0:00:41.797 ******** 2025-03-27 00:45:04.876988 | orchestrator | skipping: [testbed-node-5] 2025-03-27 
00:45:04.877550 | orchestrator | 2025-03-27 00:45:04.879821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:04.879985 | orchestrator | Thursday 27 March 2025 00:45:04 +0000 (0:00:00.219) 0:00:42.016 ******** 2025-03-27 00:45:05.106452 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:05.108551 | orchestrator | 2025-03-27 00:45:05.109628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:05.110225 | orchestrator | Thursday 27 March 2025 00:45:05 +0000 (0:00:00.229) 0:00:42.246 ******** 2025-03-27 00:45:05.331511 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:05.332304 | orchestrator | 2025-03-27 00:45:05.333890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:05.334293 | orchestrator | Thursday 27 March 2025 00:45:05 +0000 (0:00:00.224) 0:00:42.471 ******** 2025-03-27 00:45:06.017769 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-03-27 00:45:06.019041 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-03-27 00:45:06.019594 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-03-27 00:45:06.022621 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-03-27 00:45:06.022988 | orchestrator | 2025-03-27 00:45:06.023354 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:06.024374 | orchestrator | Thursday 27 March 2025 00:45:06 +0000 (0:00:00.686) 0:00:43.158 ******** 2025-03-27 00:45:06.220478 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:06.221133 | orchestrator | 2025-03-27 00:45:06.221968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:06.229124 | orchestrator | Thursday 27 March 2025 00:45:06 +0000 (0:00:00.201) 0:00:43.360 ******** 2025-03-27 00:45:06.452891 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:06.454301 | orchestrator | 2025-03-27 00:45:06.455374 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:06.456063 | orchestrator | Thursday 27 March 2025 00:45:06 +0000 (0:00:00.232) 0:00:43.593 ******** 2025-03-27 00:45:06.670325 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:06.670612 | orchestrator | 2025-03-27 00:45:06.671387 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:45:06.674013 | orchestrator | Thursday 27 March 2025 00:45:06 +0000 (0:00:00.216) 0:00:43.809 ******** 2025-03-27 00:45:06.877728 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:06.878725 | orchestrator | 2025-03-27 00:45:06.879495 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-27 00:45:06.880322 | orchestrator | Thursday 27 March 2025 00:45:06 +0000 (0:00:00.206) 0:00:44.016 ******** 2025-03-27 00:45:07.281411 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-03-27 00:45:07.285874 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-03-27 00:45:07.286257 | orchestrator | 2025-03-27 00:45:07.286286 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-27 00:45:07.286307 | orchestrator | Thursday 27 March 2025 00:45:07 +0000 (0:00:00.404) 0:00:44.421 ******** 2025-03-27 00:45:07.446583 | 
orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:07.447369 | orchestrator | 2025-03-27 00:45:07.447407 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-27 00:45:07.447468 | orchestrator | Thursday 27 March 2025 00:45:07 +0000 (0:00:00.165) 0:00:44.587 ******** 2025-03-27 00:45:07.607309 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:07.607817 | orchestrator | 2025-03-27 00:45:07.608918 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-27 00:45:07.611634 | orchestrator | Thursday 27 March 2025 00:45:07 +0000 (0:00:00.159) 0:00:44.746 ******** 2025-03-27 00:45:07.748470 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:07.749712 | orchestrator | 2025-03-27 00:45:07.750772 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-27 00:45:07.751426 | orchestrator | Thursday 27 March 2025 00:45:07 +0000 (0:00:00.141) 0:00:44.888 ******** 2025-03-27 00:45:07.908393 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:45:07.908911 | orchestrator | 2025-03-27 00:45:07.911311 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-27 00:45:08.096215 | orchestrator | Thursday 27 March 2025 00:45:07 +0000 (0:00:00.160) 0:00:45.048 ******** 2025-03-27 00:45:08.096285 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '923c5540-3b69-54d6-b090-bccde0d698f1'}}) 2025-03-27 00:45:08.097563 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8acd0346-cc61-560a-be8a-825f05553edd'}}) 2025-03-27 00:45:08.098517 | orchestrator | 2025-03-27 00:45:08.099353 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-27 00:45:08.103178 | orchestrator | Thursday 27 March 2025 00:45:08 +0000 (0:00:00.187) 0:00:45.236 ******** 2025-03-27 00:45:08.301343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '923c5540-3b69-54d6-b090-bccde0d698f1'}})  2025-03-27 00:45:08.302118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8acd0346-cc61-560a-be8a-825f05553edd'}})  2025-03-27 00:45:08.303420 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:08.303928 | orchestrator | 2025-03-27 00:45:08.305261 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-27 00:45:08.305610 | orchestrator | Thursday 27 March 2025 00:45:08 +0000 (0:00:00.203) 0:00:45.440 ******** 2025-03-27 00:45:08.494073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '923c5540-3b69-54d6-b090-bccde0d698f1'}})  2025-03-27 00:45:08.494259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8acd0346-cc61-560a-be8a-825f05553edd'}})  2025-03-27 00:45:08.494289 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:08.494853 | orchestrator | 2025-03-27 00:45:08.495401 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-27 00:45:08.496353 | orchestrator | Thursday 27 March 2025 00:45:08 +0000 (0:00:00.193) 0:00:45.634 ******** 2025-03-27 00:45:08.665232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '923c5540-3b69-54d6-b090-bccde0d698f1'}})  2025-03-27 00:45:08.668440 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8acd0346-cc61-560a-be8a-825f05553edd'}})  2025-03-27 00:45:08.668534 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:08.669195 | orchestrator | 2025-03-27 00:45:08.669227 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-27 00:45:08.669898 | orchestrator | Thursday 27 March 2025 00:45:08 +0000 (0:00:00.169) 0:00:45.803 ******** 2025-03-27 00:45:08.814523 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:45:08.815922 | orchestrator | 2025-03-27 00:45:08.817302 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-27 00:45:08.817629 | orchestrator | Thursday 27 March 2025 00:45:08 +0000 (0:00:00.150) 0:00:45.954 ******** 2025-03-27 00:45:08.978241 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:45:08.979278 | orchestrator | 2025-03-27 00:45:08.979792 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-27 00:45:08.979807 | orchestrator | Thursday 27 March 2025 00:45:08 +0000 (0:00:00.164) 0:00:46.118 ******** 2025-03-27 00:45:09.114819 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:09.116077 | orchestrator | 2025-03-27 00:45:09.116119 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-27 00:45:09.116828 | orchestrator | Thursday 27 March 2025 00:45:09 +0000 (0:00:00.134) 0:00:46.253 ******** 2025-03-27 00:45:09.510269 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:09.511405 | orchestrator | 2025-03-27 00:45:09.512377 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-27 00:45:09.514000 | orchestrator | Thursday 27 March 2025 00:45:09 +0000 (0:00:00.395) 0:00:46.649 ******** 2025-03-27 00:45:09.663762 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:09.665328 | orchestrator | 2025-03-27 00:45:09.666268 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-27 00:45:09.669357 | orchestrator | Thursday 27 March 2025 00:45:09 +0000 (0:00:00.154) 0:00:46.803 ******** 2025-03-27 00:45:09.819651 | orchestrator | ok: [testbed-node-5] => { 2025-03-27 00:45:09.820954 | orchestrator |  "ceph_osd_devices": { 2025-03-27 00:45:09.820995 | orchestrator |  "sdb": { 2025-03-27 00:45:09.821685 | orchestrator |  "osd_lvm_uuid": "923c5540-3b69-54d6-b090-bccde0d698f1" 2025-03-27 00:45:09.823852 | orchestrator |  }, 2025-03-27 00:45:09.825570 | orchestrator |  "sdc": { 2025-03-27 00:45:09.825599 | orchestrator |  "osd_lvm_uuid": "8acd0346-cc61-560a-be8a-825f05553edd" 2025-03-27 00:45:09.826509 | orchestrator |  } 2025-03-27 00:45:09.827926 | orchestrator |  } 2025-03-27 00:45:09.828801 | orchestrator | } 2025-03-27 00:45:09.829803 | orchestrator | 2025-03-27 00:45:09.831374 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-27 00:45:09.832223 | orchestrator | Thursday 27 March 2025 00:45:09 +0000 (0:00:00.156) 0:00:46.959 ******** 2025-03-27 00:45:09.952568 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:09.954138 | orchestrator | 2025-03-27 00:45:09.955526 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-27 00:45:09.956627 | orchestrator | Thursday 27 March 2025 00:45:09 +0000 (0:00:00.133) 0:00:47.093 ******** 2025-03-27 
00:45:10.134268 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:10.135479 | orchestrator | 2025-03-27 00:45:10.135589 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-27 00:45:10.136939 | orchestrator | Thursday 27 March 2025 00:45:10 +0000 (0:00:00.180) 0:00:47.273 ******** 2025-03-27 00:45:10.301299 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:45:10.302458 | orchestrator | 2025-03-27 00:45:10.304112 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-27 00:45:10.304948 | orchestrator | Thursday 27 March 2025 00:45:10 +0000 (0:00:00.166) 0:00:47.440 ******** 2025-03-27 00:45:10.612674 | orchestrator | changed: [testbed-node-5] => { 2025-03-27 00:45:10.613656 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-27 00:45:10.614914 | orchestrator |  "ceph_osd_devices": { 2025-03-27 00:45:10.616252 | orchestrator |  "sdb": { 2025-03-27 00:45:10.617343 | orchestrator |  "osd_lvm_uuid": "923c5540-3b69-54d6-b090-bccde0d698f1" 2025-03-27 00:45:10.618829 | orchestrator |  }, 2025-03-27 00:45:10.619477 | orchestrator |  "sdc": { 2025-03-27 00:45:10.620314 | orchestrator |  "osd_lvm_uuid": "8acd0346-cc61-560a-be8a-825f05553edd" 2025-03-27 00:45:10.620809 | orchestrator |  } 2025-03-27 00:45:10.621907 | orchestrator |  }, 2025-03-27 00:45:10.623047 | orchestrator |  "lvm_volumes": [ 2025-03-27 00:45:10.624062 | orchestrator |  { 2025-03-27 00:45:10.624671 | orchestrator |  "data": "osd-block-923c5540-3b69-54d6-b090-bccde0d698f1", 2025-03-27 00:45:10.625831 | orchestrator |  "data_vg": "ceph-923c5540-3b69-54d6-b090-bccde0d698f1" 2025-03-27 00:45:10.626086 | orchestrator |  }, 2025-03-27 00:45:10.626821 | orchestrator |  { 2025-03-27 00:45:10.627516 | orchestrator |  "data": "osd-block-8acd0346-cc61-560a-be8a-825f05553edd", 2025-03-27 00:45:10.628397 | orchestrator |  "data_vg": "ceph-8acd0346-cc61-560a-be8a-825f05553edd" 2025-03-27 00:45:10.629158 | orchestrator |  } 2025-03-27 00:45:10.629209 | orchestrator |  ] 2025-03-27 00:45:10.629515 | orchestrator |  } 2025-03-27 00:45:10.630133 | orchestrator | } 2025-03-27 00:45:10.631044 | orchestrator | 2025-03-27 00:45:10.631625 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-27 00:45:10.632023 | orchestrator | Thursday 27 March 2025 00:45:10 +0000 (0:00:00.312) 0:00:47.753 ******** 2025-03-27 00:45:12.006370 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-27 00:45:12.008256 | orchestrator | 2025-03-27 00:45:12.011619 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:45:12.011671 | orchestrator | 2025-03-27 00:45:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:45:12.012209 | orchestrator | 2025-03-27 00:45:12 | INFO  | Please wait and do not abort execution. 
2025-03-27 00:45:12.012242 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-27 00:45:12.013389 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-27 00:45:12.014669 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-27 00:45:12.015260 | orchestrator | 2025-03-27 00:45:12.015290 | orchestrator | 2025-03-27 00:45:12.016940 | orchestrator | 2025-03-27 00:45:12.017326 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 00:45:12.017356 | orchestrator | Thursday 27 March 2025 00:45:11 +0000 (0:00:01.391) 0:00:49.144 ******** 2025-03-27 00:45:12.018182 | orchestrator | =============================================================================== 2025-03-27 00:45:12.019014 | orchestrator | Write configuration file ------------------------------------------------ 5.10s 2025-03-27 00:45:12.019972 | orchestrator | Add known partitions to the list of available block devices ------------- 1.53s 2025-03-27 00:45:12.020705 | orchestrator | Add known links to the list of available block devices ------------------ 1.36s 2025-03-27 00:45:12.021398 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s 2025-03-27 00:45:12.022187 | orchestrator | Get initial list of available block devices ----------------------------- 1.17s 2025-03-27 00:45:12.022955 | orchestrator | Print configuration data ------------------------------------------------ 1.15s 2025-03-27 00:45:12.023317 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 1.11s 2025-03-27 00:45:12.024093 | orchestrator | Add known links to the list of available block devices ------------------ 1.03s 2025-03-27 00:45:12.024772 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s 2025-03-27 00:45:12.025292 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2025-03-27 00:45:12.026121 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.83s 2025-03-27 00:45:12.026456 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s 2025-03-27 00:45:12.027020 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.76s 2025-03-27 00:45:12.027516 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.75s 2025-03-27 00:45:12.027957 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2025-03-27 00:45:12.028668 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2025-03-27 00:45:12.029062 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2025-03-27 00:45:12.029539 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2025-03-27 00:45:12.030099 | orchestrator | Set WAL devices config data --------------------------------------------- 0.70s 2025-03-27 00:45:12.030400 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.69s 2025-03-27 00:45:24.320836 | orchestrator | 2025-03-27 00:45:24 | INFO  | Task 82149536-f703-4ddc-b766-240c2d4f7ac2 is running in background. Output coming soon. 
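The play recapped above produces, per storage node, a block-only OSD layout: every device in ceph_osd_devices receives a stable osd_lvm_uuid, and lvm_volumes pairs each OSD with a VG/LV named after that UUID (the DB/WAL variants were all skipped). Based on the configuration data printed for testbed-node-5, the file written by the "Write configuration file" handler would contain roughly the following; this is a sketch reconstructed from the log output, and the exact file name and surrounding layout are assumptions:

# Sketch of the per-host Ceph LVM configuration for testbed-node-5,
# reconstructed from _ceph_configure_lvm_config_data in the log above.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 923c5540-3b69-54d6-b090-bccde0d698f1
  sdc:
    osd_lvm_uuid: 8acd0346-cc61-560a-be8a-825f05553edd
lvm_volumes:
  - data: osd-block-923c5540-3b69-54d6-b090-bccde0d698f1
    data_vg: ceph-923c5540-3b69-54d6-b090-bccde0d698f1
  - data: osd-block-8acd0346-cc61-560a-be8a-825f05553edd
    data_vg: ceph-8acd0346-cc61-560a-be8a-825f05553edd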
2025-03-27 00:46:04.200313 | orchestrator | 2025-03-27 00:45:54 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-03-27 00:46:05.994357 | orchestrator | 2025-03-27 00:45:54 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-03-27 00:46:05.994468 | orchestrator | 2025-03-27 00:45:54 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-03-27 00:46:05.994487 | orchestrator | 2025-03-27 00:45:55 | INFO  | Handling group overwrites in 99-overwrite 2025-03-27 00:46:05.994516 | orchestrator | 2025-03-27 00:45:55 | INFO  | Removing group ceph-mds from 50-ceph 2025-03-27 00:46:05.994545 | orchestrator | 2025-03-27 00:45:55 | INFO  | Removing group ceph-rgw from 50-ceph 2025-03-27 00:46:05.994561 | orchestrator | 2025-03-27 00:45:55 | INFO  | Removing group netbird:children from 50-infrastruture 2025-03-27 00:46:05.994576 | orchestrator | 2025-03-27 00:45:55 | INFO  | Removing group storage:children from 50-kolla 2025-03-27 00:46:05.994591 | orchestrator | 2025-03-27 00:45:55 | INFO  | Removing group frr:children from 60-generic 2025-03-27 00:46:05.994606 | orchestrator | 2025-03-27 00:45:55 | INFO  | Handling group overwrites in 20-roles 2025-03-27 00:46:05.994621 | orchestrator | 2025-03-27 00:45:55 | INFO  | Removing group k3s_node from 50-infrastruture 2025-03-27 00:46:05.994636 | orchestrator | 2025-03-27 00:45:55 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-03-27 00:46:05.994650 | orchestrator | 2025-03-27 00:46:04 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-03-27 00:46:05.994683 | orchestrator | 2025-03-27 00:46:05 | INFO  | Task c226b631-608a-47d2-82f8-80c0fbc7c52a (ceph-create-lvm-devices) was prepared for execution. 2025-03-27 00:46:09.206966 | orchestrator | 2025-03-27 00:46:05 | INFO  | It takes a moment until task c226b631-608a-47d2-82f8-80c0fbc7c52a (ceph-create-lvm-devices) has been started and output is visible here. 
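The ceph-create-lvm-devices play that starts below consumes the lvm_volumes list produced earlier and creates one volume group and one logical volume per OSD ("Create block VGs" / "Create block LVs"). As an illustrative sketch only, not the actual OSISM task file, the equivalent Ansible steps could look like the following, assuming the community.general collection is available and that block_vg_pvs (a hypothetical helper variable, not shown in the log) maps each data_vg to its backing device such as /dev/sdb or /dev/sdc:

# Illustrative sketch, not the OSISM implementation.
# block_vg_pvs is an assumed mapping, e.g. {"ceph-<uuid>": "/dev/sdb", ...}
- name: Create block VGs (sketch)
  community.general.lvg:
    vg: "{{ item.data_vg }}"
    pvs: "{{ block_vg_pvs[item.data_vg] }}"
  loop: "{{ lvm_volumes }}"

- name: Create block LVs (sketch)
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: 100%VG
  loop: "{{ lvm_volumes }}"

In the run below, both tasks report "changed" once per lvm_volumes entry, i.e. one whole-device VG/LV pair per OSD device (sdb and sdc).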
2025-03-27 00:46:09.207106 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-27 00:46:09.728643 | orchestrator | 2025-03-27 00:46:09.729330 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-27 00:46:09.734366 | orchestrator | 2025-03-27 00:46:09.734819 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-27 00:46:09.735149 | orchestrator | Thursday 27 March 2025 00:46:09 +0000 (0:00:00.446) 0:00:00.446 ******** 2025-03-27 00:46:09.962512 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-27 00:46:09.964118 | orchestrator | 2025-03-27 00:46:09.966702 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-27 00:46:09.967442 | orchestrator | Thursday 27 March 2025 00:46:09 +0000 (0:00:00.235) 0:00:00.681 ******** 2025-03-27 00:46:10.216627 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:46:10.216803 | orchestrator | 2025-03-27 00:46:10.216832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:10.217349 | orchestrator | Thursday 27 March 2025 00:46:10 +0000 (0:00:00.253) 0:00:00.935 ******** 2025-03-27 00:46:11.016714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-03-27 00:46:11.017152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-03-27 00:46:11.018749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-03-27 00:46:11.019736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-03-27 00:46:11.023123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-03-27 00:46:11.023692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-03-27 00:46:11.023721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-03-27 00:46:11.023737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-03-27 00:46:11.023757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-03-27 00:46:11.024501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-03-27 00:46:11.024851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-03-27 00:46:11.025866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-03-27 00:46:11.026705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-03-27 00:46:11.027371 | orchestrator | 2025-03-27 00:46:11.028113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:11.028664 | orchestrator | Thursday 27 March 2025 00:46:11 +0000 (0:00:00.801) 0:00:01.736 ******** 2025-03-27 00:46:11.228073 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:11.228931 | orchestrator | 2025-03-27 00:46:11.232026 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:11.440567 | orchestrator | Thursday 27 March 2025 00:46:11 +0000 
(0:00:00.209) 0:00:01.946 ******** 2025-03-27 00:46:11.440697 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:11.440976 | orchestrator | 2025-03-27 00:46:11.441927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:11.442891 | orchestrator | Thursday 27 March 2025 00:46:11 +0000 (0:00:00.213) 0:00:02.159 ******** 2025-03-27 00:46:11.658601 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:11.660118 | orchestrator | 2025-03-27 00:46:11.660234 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:11.662723 | orchestrator | Thursday 27 March 2025 00:46:11 +0000 (0:00:00.212) 0:00:02.372 ******** 2025-03-27 00:46:11.869215 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:11.870252 | orchestrator | 2025-03-27 00:46:11.872187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:11.874857 | orchestrator | Thursday 27 March 2025 00:46:11 +0000 (0:00:00.216) 0:00:02.588 ******** 2025-03-27 00:46:12.090743 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:12.091358 | orchestrator | 2025-03-27 00:46:12.091395 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:12.092246 | orchestrator | Thursday 27 March 2025 00:46:12 +0000 (0:00:00.220) 0:00:02.808 ******** 2025-03-27 00:46:12.309955 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:12.312344 | orchestrator | 2025-03-27 00:46:12.314081 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:12.314744 | orchestrator | Thursday 27 March 2025 00:46:12 +0000 (0:00:00.217) 0:00:03.026 ******** 2025-03-27 00:46:12.518537 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:12.740781 | orchestrator | 2025-03-27 00:46:12.740843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:12.740859 | orchestrator | Thursday 27 March 2025 00:46:12 +0000 (0:00:00.210) 0:00:03.236 ******** 2025-03-27 00:46:12.740883 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:12.741733 | orchestrator | 2025-03-27 00:46:12.742893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:12.743341 | orchestrator | Thursday 27 March 2025 00:46:12 +0000 (0:00:00.222) 0:00:03.459 ******** 2025-03-27 00:46:13.397337 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21) 2025-03-27 00:46:13.397836 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21) 2025-03-27 00:46:13.398112 | orchestrator | 2025-03-27 00:46:13.398435 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:13.398952 | orchestrator | Thursday 27 March 2025 00:46:13 +0000 (0:00:00.651) 0:00:04.111 ******** 2025-03-27 00:46:14.226079 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3a3b00e3-da7a-4c3b-8b0c-ab011795b6c9) 2025-03-27 00:46:14.226405 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3a3b00e3-da7a-4c3b-8b0c-ab011795b6c9) 2025-03-27 00:46:14.227514 | orchestrator | 2025-03-27 00:46:14.230158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 
00:46:14.670320 | orchestrator | Thursday 27 March 2025 00:46:14 +0000 (0:00:00.832) 0:00:04.944 ******** 2025-03-27 00:46:14.670405 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1a89a9ff-44c1-4404-a46c-604e790c64d7) 2025-03-27 00:46:14.671436 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1a89a9ff-44c1-4404-a46c-604e790c64d7) 2025-03-27 00:46:14.671768 | orchestrator | 2025-03-27 00:46:14.672423 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:14.673241 | orchestrator | Thursday 27 March 2025 00:46:14 +0000 (0:00:00.441) 0:00:05.386 ******** 2025-03-27 00:46:15.115518 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_874d53e3-fb17-4b5b-8e0b-b33da9e1cc23) 2025-03-27 00:46:15.116026 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_874d53e3-fb17-4b5b-8e0b-b33da9e1cc23) 2025-03-27 00:46:15.116151 | orchestrator | 2025-03-27 00:46:15.116336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:15.117298 | orchestrator | Thursday 27 March 2025 00:46:15 +0000 (0:00:00.448) 0:00:05.834 ******** 2025-03-27 00:46:15.498292 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-27 00:46:15.499094 | orchestrator | 2025-03-27 00:46:15.500020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:15.500792 | orchestrator | Thursday 27 March 2025 00:46:15 +0000 (0:00:00.380) 0:00:06.215 ******** 2025-03-27 00:46:16.006653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-03-27 00:46:16.007260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-03-27 00:46:16.008127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-03-27 00:46:16.009275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-03-27 00:46:16.010499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-03-27 00:46:16.011672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-03-27 00:46:16.012599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-03-27 00:46:16.013696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-03-27 00:46:16.014688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-03-27 00:46:16.015508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-03-27 00:46:16.016025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-03-27 00:46:16.016514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-03-27 00:46:16.017148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-03-27 00:46:16.017678 | orchestrator | 2025-03-27 00:46:16.018590 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:16.018878 | orchestrator | Thursday 27 March 2025 00:46:16 
+0000 (0:00:00.510) 0:00:06.726 ******** 2025-03-27 00:46:16.211516 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:16.212128 | orchestrator | 2025-03-27 00:46:16.214433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:16.215300 | orchestrator | Thursday 27 March 2025 00:46:16 +0000 (0:00:00.203) 0:00:06.930 ******** 2025-03-27 00:46:16.436015 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:16.437481 | orchestrator | 2025-03-27 00:46:16.438531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:16.439642 | orchestrator | Thursday 27 March 2025 00:46:16 +0000 (0:00:00.222) 0:00:07.153 ******** 2025-03-27 00:46:16.642537 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:16.642723 | orchestrator | 2025-03-27 00:46:16.642786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:16.643283 | orchestrator | Thursday 27 March 2025 00:46:16 +0000 (0:00:00.209) 0:00:07.363 ******** 2025-03-27 00:46:16.868250 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:16.872571 | orchestrator | 2025-03-27 00:46:16.873876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:16.873908 | orchestrator | Thursday 27 March 2025 00:46:16 +0000 (0:00:00.224) 0:00:07.587 ******** 2025-03-27 00:46:17.518418 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:17.518884 | orchestrator | 2025-03-27 00:46:17.522208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:17.731890 | orchestrator | Thursday 27 March 2025 00:46:17 +0000 (0:00:00.648) 0:00:08.236 ******** 2025-03-27 00:46:17.732016 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:17.732464 | orchestrator | 2025-03-27 00:46:17.733507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:17.736463 | orchestrator | Thursday 27 March 2025 00:46:17 +0000 (0:00:00.214) 0:00:08.451 ******** 2025-03-27 00:46:17.967513 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:17.968353 | orchestrator | 2025-03-27 00:46:17.971721 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:18.187567 | orchestrator | Thursday 27 March 2025 00:46:17 +0000 (0:00:00.236) 0:00:08.687 ******** 2025-03-27 00:46:18.187679 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:18.188888 | orchestrator | 2025-03-27 00:46:18.189739 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:18.192224 | orchestrator | Thursday 27 March 2025 00:46:18 +0000 (0:00:00.218) 0:00:08.906 ******** 2025-03-27 00:46:18.893217 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-03-27 00:46:18.894253 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-03-27 00:46:18.896191 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-03-27 00:46:18.897528 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-03-27 00:46:18.898355 | orchestrator | 2025-03-27 00:46:18.899315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:18.900017 | orchestrator | Thursday 27 March 2025 00:46:18 +0000 (0:00:00.704) 0:00:09.610 ******** 2025-03-27 00:46:19.108447 | orchestrator | 
skipping: [testbed-node-3] 2025-03-27 00:46:19.108659 | orchestrator | 2025-03-27 00:46:19.108685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:19.108707 | orchestrator | Thursday 27 March 2025 00:46:19 +0000 (0:00:00.214) 0:00:09.825 ******** 2025-03-27 00:46:19.325160 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:19.326288 | orchestrator | 2025-03-27 00:46:19.329441 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:19.329796 | orchestrator | Thursday 27 March 2025 00:46:19 +0000 (0:00:00.218) 0:00:10.044 ******** 2025-03-27 00:46:19.534777 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:19.535720 | orchestrator | 2025-03-27 00:46:19.536515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:19.537840 | orchestrator | Thursday 27 March 2025 00:46:19 +0000 (0:00:00.209) 0:00:10.253 ******** 2025-03-27 00:46:19.759770 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:19.760216 | orchestrator | 2025-03-27 00:46:19.761530 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-27 00:46:19.762911 | orchestrator | Thursday 27 March 2025 00:46:19 +0000 (0:00:00.225) 0:00:10.479 ******** 2025-03-27 00:46:19.897415 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:19.898262 | orchestrator | 2025-03-27 00:46:19.899266 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-27 00:46:19.901895 | orchestrator | Thursday 27 March 2025 00:46:19 +0000 (0:00:00.137) 0:00:10.617 ******** 2025-03-27 00:46:20.129235 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5e2bf155-ac50-562d-a3fc-a4d9038fe730'}}) 2025-03-27 00:46:20.129600 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd321ea45-1a00-5698-8092-45c793cb3b8c'}}) 2025-03-27 00:46:20.132664 | orchestrator | 2025-03-27 00:46:20.133313 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-27 00:46:20.133716 | orchestrator | Thursday 27 March 2025 00:46:20 +0000 (0:00:00.230) 0:00:10.848 ******** 2025-03-27 00:46:22.515238 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'}) 2025-03-27 00:46:22.515801 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'}) 2025-03-27 00:46:22.518484 | orchestrator | 2025-03-27 00:46:22.519664 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-27 00:46:22.520347 | orchestrator | Thursday 27 March 2025 00:46:22 +0000 (0:00:02.384) 0:00:13.232 ******** 2025-03-27 00:46:22.690665 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:22.692277 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:22.692638 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:22.695761 | orchestrator | 2025-03-27 00:46:22.697348 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-27 00:46:22.697384 | orchestrator | Thursday 27 March 2025 00:46:22 +0000 (0:00:00.177) 0:00:13.410 ******** 2025-03-27 00:46:24.265957 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'}) 2025-03-27 00:46:24.268259 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'}) 2025-03-27 00:46:24.268337 | orchestrator | 2025-03-27 00:46:24.268970 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-27 00:46:24.269762 | orchestrator | Thursday 27 March 2025 00:46:24 +0000 (0:00:01.573) 0:00:14.984 ******** 2025-03-27 00:46:24.457470 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:24.458299 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:24.459345 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:24.459864 | orchestrator | 2025-03-27 00:46:24.460764 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-27 00:46:24.461434 | orchestrator | Thursday 27 March 2025 00:46:24 +0000 (0:00:00.192) 0:00:15.176 ******** 2025-03-27 00:46:24.631687 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:24.631883 | orchestrator | 2025-03-27 00:46:24.632691 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-27 00:46:24.633982 | orchestrator | Thursday 27 March 2025 00:46:24 +0000 (0:00:00.172) 0:00:15.348 ******** 2025-03-27 00:46:24.832720 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:24.832952 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:24.833446 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:24.833479 | orchestrator | 2025-03-27 00:46:24.834276 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-27 00:46:24.834375 | orchestrator | Thursday 27 March 2025 00:46:24 +0000 (0:00:00.204) 0:00:15.552 ******** 2025-03-27 00:46:24.984622 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:24.985245 | orchestrator | 2025-03-27 00:46:24.987661 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-27 00:46:24.988101 | orchestrator | Thursday 27 March 2025 00:46:24 +0000 (0:00:00.147) 0:00:15.700 ******** 2025-03-27 00:46:25.171543 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:25.173099 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:25.174911 | orchestrator | skipping: 
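The "Create block LVs" step then carves one logical volume per OSD (osd-block-<uuid>) inside its block VG; the DB/WAL VG tasks that follow are skipped because this testbed defines no separate ceph_db_devices, ceph_wal_devices or ceph_db_wal_devices. A minimal sketch with community.general.lvol, assuming the block LV fills its whole VG (an assumption, not confirmed by the log):

- name: Create block LVs (illustrative sketch, not the OSISM task)
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: 100%FREE        # assumption: one OSD block LV uses the entire VG
    state: present
  loop: "{{ lvm_volumes }}"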
[testbed-node-3] 2025-03-27 00:46:25.176354 | orchestrator | 2025-03-27 00:46:25.178120 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-27 00:46:25.179984 | orchestrator | Thursday 27 March 2025 00:46:25 +0000 (0:00:00.189) 0:00:15.890 ******** 2025-03-27 00:46:25.497067 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:25.497505 | orchestrator | 2025-03-27 00:46:25.497541 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-27 00:46:25.498690 | orchestrator | Thursday 27 March 2025 00:46:25 +0000 (0:00:00.321) 0:00:16.211 ******** 2025-03-27 00:46:25.666270 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:25.666953 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:25.667935 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:25.669914 | orchestrator | 2025-03-27 00:46:25.670242 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-27 00:46:25.672239 | orchestrator | Thursday 27 March 2025 00:46:25 +0000 (0:00:00.172) 0:00:16.384 ******** 2025-03-27 00:46:25.851773 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:46:25.852935 | orchestrator | 2025-03-27 00:46:25.853874 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-27 00:46:25.855284 | orchestrator | Thursday 27 March 2025 00:46:25 +0000 (0:00:00.187) 0:00:16.571 ******** 2025-03-27 00:46:26.047774 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:26.049466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:26.050523 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:26.051367 | orchestrator | 2025-03-27 00:46:26.051459 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-27 00:46:26.052632 | orchestrator | Thursday 27 March 2025 00:46:26 +0000 (0:00:00.196) 0:00:16.767 ******** 2025-03-27 00:46:26.250553 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:26.251432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:26.251468 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:26.253090 | orchestrator | 2025-03-27 00:46:26.253505 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-27 00:46:26.254854 | orchestrator | Thursday 27 March 2025 00:46:26 +0000 (0:00:00.201) 0:00:16.969 ******** 2025-03-27 00:46:26.465665 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:26.466125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:26.467145 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:26.468691 | orchestrator | 2025-03-27 00:46:26.471052 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-27 00:46:26.625798 | orchestrator | Thursday 27 March 2025 00:46:26 +0000 (0:00:00.216) 0:00:17.186 ******** 2025-03-27 00:46:26.625842 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:26.626608 | orchestrator | 2025-03-27 00:46:26.628620 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-27 00:46:26.786735 | orchestrator | Thursday 27 March 2025 00:46:26 +0000 (0:00:00.158) 0:00:17.344 ******** 2025-03-27 00:46:26.786776 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:26.787355 | orchestrator | 2025-03-27 00:46:26.788427 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-27 00:46:26.789115 | orchestrator | Thursday 27 March 2025 00:46:26 +0000 (0:00:00.160) 0:00:17.504 ******** 2025-03-27 00:46:26.945736 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:26.946361 | orchestrator | 2025-03-27 00:46:26.948479 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-27 00:46:26.949460 | orchestrator | Thursday 27 March 2025 00:46:26 +0000 (0:00:00.160) 0:00:17.665 ******** 2025-03-27 00:46:27.122824 | orchestrator | ok: [testbed-node-3] => { 2025-03-27 00:46:27.123659 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-27 00:46:27.124871 | orchestrator | } 2025-03-27 00:46:27.125766 | orchestrator | 2025-03-27 00:46:27.127499 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-27 00:46:27.127957 | orchestrator | Thursday 27 March 2025 00:46:27 +0000 (0:00:00.176) 0:00:17.842 ******** 2025-03-27 00:46:27.281394 | orchestrator | ok: [testbed-node-3] => { 2025-03-27 00:46:27.281648 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-27 00:46:27.282527 | orchestrator | } 2025-03-27 00:46:27.283288 | orchestrator | 2025-03-27 00:46:27.283787 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-27 00:46:27.284474 | orchestrator | Thursday 27 March 2025 00:46:27 +0000 (0:00:00.158) 0:00:18.000 ******** 2025-03-27 00:46:27.436655 | orchestrator | ok: [testbed-node-3] => { 2025-03-27 00:46:27.437084 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-27 00:46:27.438588 | orchestrator | } 2025-03-27 00:46:27.439305 | orchestrator | 2025-03-27 00:46:27.441931 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-27 00:46:27.442362 | orchestrator | Thursday 27 March 2025 00:46:27 +0000 (0:00:00.155) 0:00:18.156 ******** 2025-03-27 00:46:28.395163 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:46:28.396094 | orchestrator | 2025-03-27 00:46:28.396201 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-27 00:46:28.397491 | orchestrator | Thursday 27 March 2025 00:46:28 +0000 (0:00:00.957) 0:00:19.113 ******** 2025-03-27 00:46:28.954880 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:46:28.957706 | orchestrator | 2025-03-27 00:46:28.957785 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
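The "Gather ... VGs with total and available size in bytes" tasks feed the capacity checks that follow; with no dedicated DB/WAL devices configured, the combined report ends up empty (vgs_report.vg: [] a little further down). A sketch of how such data can be collected with vgs in JSON report format; the exact command line and the filtering to DB/WAL VGs used by the OSISM role are assumptions:

- name: Gather DB VGs with total and available size in bytes (illustrative sketch)
  ansible.builtin.command: vgs --reportformat json --units b --nosuffix -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output   # register name mirrors the task names in the log
  changed_when: false

- name: Combine JSON from _db/wal/db_wal_vgs_cmd_output (illustrative sketch)
  ansible.builtin.set_fact:
    vgs_report:
      # LVM's JSON report nests the rows under report[0].vg
      vg: "{{ (_db_vgs_cmd_output.stdout | from_json).report.0.vg }}"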
**************** 2025-03-27 00:46:28.957808 | orchestrator | Thursday 27 March 2025 00:46:28 +0000 (0:00:00.559) 0:00:19.672 ******** 2025-03-27 00:46:29.489018 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:46:29.491531 | orchestrator | 2025-03-27 00:46:29.675857 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-27 00:46:29.675978 | orchestrator | Thursday 27 March 2025 00:46:29 +0000 (0:00:00.534) 0:00:20.207 ******** 2025-03-27 00:46:29.676015 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:46:29.676249 | orchestrator | 2025-03-27 00:46:29.676734 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-27 00:46:29.677247 | orchestrator | Thursday 27 March 2025 00:46:29 +0000 (0:00:00.186) 0:00:20.394 ******** 2025-03-27 00:46:29.782153 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:29.783319 | orchestrator | 2025-03-27 00:46:29.784058 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-27 00:46:29.785098 | orchestrator | Thursday 27 March 2025 00:46:29 +0000 (0:00:00.107) 0:00:20.502 ******** 2025-03-27 00:46:29.921377 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:29.921661 | orchestrator | 2025-03-27 00:46:29.922431 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-27 00:46:29.922924 | orchestrator | Thursday 27 March 2025 00:46:29 +0000 (0:00:00.139) 0:00:20.641 ******** 2025-03-27 00:46:30.062509 | orchestrator | ok: [testbed-node-3] => { 2025-03-27 00:46:30.063779 | orchestrator |  "vgs_report": { 2025-03-27 00:46:30.065284 | orchestrator |  "vg": [] 2025-03-27 00:46:30.067139 | orchestrator |  } 2025-03-27 00:46:30.067432 | orchestrator | } 2025-03-27 00:46:30.068766 | orchestrator | 2025-03-27 00:46:30.069597 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-27 00:46:30.069948 | orchestrator | Thursday 27 March 2025 00:46:30 +0000 (0:00:00.140) 0:00:20.782 ******** 2025-03-27 00:46:30.263706 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:30.264663 | orchestrator | 2025-03-27 00:46:30.264772 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-27 00:46:30.265721 | orchestrator | Thursday 27 March 2025 00:46:30 +0000 (0:00:00.201) 0:00:20.984 ******** 2025-03-27 00:46:30.437375 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:30.440926 | orchestrator | 2025-03-27 00:46:30.441551 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-27 00:46:30.441757 | orchestrator | Thursday 27 March 2025 00:46:30 +0000 (0:00:00.171) 0:00:21.155 ******** 2025-03-27 00:46:30.585310 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:30.587579 | orchestrator | 2025-03-27 00:46:30.952081 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-27 00:46:30.952157 | orchestrator | Thursday 27 March 2025 00:46:30 +0000 (0:00:00.150) 0:00:21.305 ******** 2025-03-27 00:46:30.952218 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:30.953343 | orchestrator | 2025-03-27 00:46:30.953799 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-27 00:46:30.954371 | orchestrator | Thursday 27 March 2025 00:46:30 +0000 (0:00:00.364) 0:00:21.670 ******** 2025-03-27 
00:46:31.140256 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:31.140441 | orchestrator | 2025-03-27 00:46:31.141215 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-27 00:46:31.141705 | orchestrator | Thursday 27 March 2025 00:46:31 +0000 (0:00:00.189) 0:00:21.860 ******** 2025-03-27 00:46:31.286300 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:31.287599 | orchestrator | 2025-03-27 00:46:31.289763 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-27 00:46:31.290450 | orchestrator | Thursday 27 March 2025 00:46:31 +0000 (0:00:00.145) 0:00:22.005 ******** 2025-03-27 00:46:31.440730 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:31.442822 | orchestrator | 2025-03-27 00:46:31.443342 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-27 00:46:31.443625 | orchestrator | Thursday 27 March 2025 00:46:31 +0000 (0:00:00.154) 0:00:22.160 ******** 2025-03-27 00:46:31.611575 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:31.612347 | orchestrator | 2025-03-27 00:46:31.613404 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-27 00:46:31.613843 | orchestrator | Thursday 27 March 2025 00:46:31 +0000 (0:00:00.172) 0:00:22.332 ******** 2025-03-27 00:46:31.774738 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:31.775144 | orchestrator | 2025-03-27 00:46:31.776531 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-27 00:46:31.776829 | orchestrator | Thursday 27 March 2025 00:46:31 +0000 (0:00:00.158) 0:00:22.490 ******** 2025-03-27 00:46:31.915677 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:31.916452 | orchestrator | 2025-03-27 00:46:31.916482 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-27 00:46:31.916506 | orchestrator | Thursday 27 March 2025 00:46:31 +0000 (0:00:00.142) 0:00:22.633 ******** 2025-03-27 00:46:32.049383 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:32.050797 | orchestrator | 2025-03-27 00:46:32.051581 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-27 00:46:32.052155 | orchestrator | Thursday 27 March 2025 00:46:32 +0000 (0:00:00.134) 0:00:22.768 ******** 2025-03-27 00:46:32.192023 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:32.193298 | orchestrator | 2025-03-27 00:46:32.194417 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-27 00:46:32.194857 | orchestrator | Thursday 27 March 2025 00:46:32 +0000 (0:00:00.140) 0:00:22.909 ******** 2025-03-27 00:46:32.344320 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:32.345914 | orchestrator | 2025-03-27 00:46:32.347240 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-27 00:46:32.348148 | orchestrator | Thursday 27 March 2025 00:46:32 +0000 (0:00:00.153) 0:00:23.063 ******** 2025-03-27 00:46:32.521460 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:32.521997 | orchestrator | 2025-03-27 00:46:32.527517 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-27 00:46:32.529104 | orchestrator | Thursday 27 March 2025 00:46:32 +0000 (0:00:00.177) 0:00:23.240 
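The sizing guards that follow ("Fail if DB LV size < 30 GiB ...") only fire when dedicated DB or DB+WAL devices are configured, which is not the case on this node. As a sketch, such a guard can be expressed with ansible.builtin.assert; ceph_db_lv_size_bytes is a hypothetical variable used only for illustration:

- name: Fail if DB LV size < 30 GiB for ceph_db_devices (illustrative sketch)
  ansible.builtin.assert:
    that:
      - (ceph_db_lv_size_bytes | int) >= 30 * 1024 * 1024 * 1024   # 30 GiB, as named in the task
    fail_msg: "BlueStore DB LVs smaller than 30 GiB are rejected"
  when: ceph_db_devices is defined and ceph_db_devices | length > 0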
******** 2025-03-27 00:46:32.732116 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:32.732876 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:32.733594 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:32.734894 | orchestrator | 2025-03-27 00:46:32.735329 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-27 00:46:32.736074 | orchestrator | Thursday 27 March 2025 00:46:32 +0000 (0:00:00.209) 0:00:23.450 ******** 2025-03-27 00:46:33.130283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:33.130709 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:33.132203 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:33.132534 | orchestrator | 2025-03-27 00:46:33.133454 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-27 00:46:33.135263 | orchestrator | Thursday 27 March 2025 00:46:33 +0000 (0:00:00.399) 0:00:23.850 ******** 2025-03-27 00:46:33.362282 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:33.362427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:33.363071 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:33.363783 | orchestrator | 2025-03-27 00:46:33.366690 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-27 00:46:33.529135 | orchestrator | Thursday 27 March 2025 00:46:33 +0000 (0:00:00.230) 0:00:24.080 ******** 2025-03-27 00:46:33.529207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:33.530395 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:33.531462 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:33.532294 | orchestrator | 2025-03-27 00:46:33.535430 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-27 00:46:33.535584 | orchestrator | Thursday 27 March 2025 00:46:33 +0000 (0:00:00.168) 0:00:24.249 ******** 2025-03-27 00:46:33.717119 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:33.718766 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:33.718807 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:33.721819 | orchestrator | 2025-03-27 00:46:33.721851 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-27 00:46:33.885388 | orchestrator | Thursday 27 March 2025 00:46:33 +0000 (0:00:00.186) 0:00:24.436 ******** 2025-03-27 00:46:33.885476 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:33.886013 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:33.887222 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:33.887549 | orchestrator | 2025-03-27 00:46:33.890253 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-27 00:46:34.068891 | orchestrator | Thursday 27 March 2025 00:46:33 +0000 (0:00:00.168) 0:00:24.605 ******** 2025-03-27 00:46:34.068954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:34.069360 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:34.070123 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:34.070764 | orchestrator | 2025-03-27 00:46:34.071339 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-27 00:46:34.072351 | orchestrator | Thursday 27 March 2025 00:46:34 +0000 (0:00:00.182) 0:00:24.787 ******** 2025-03-27 00:46:34.254720 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:34.256055 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:34.257320 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:34.258321 | orchestrator | 2025-03-27 00:46:34.259552 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-27 00:46:34.260414 | orchestrator | Thursday 27 March 2025 00:46:34 +0000 (0:00:00.186) 0:00:24.974 ******** 2025-03-27 00:46:34.829226 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:46:34.831252 | orchestrator | 2025-03-27 00:46:34.832978 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-27 00:46:34.833433 | orchestrator | Thursday 27 March 2025 00:46:34 +0000 (0:00:00.571) 0:00:25.546 ******** 2025-03-27 00:46:35.386542 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:46:35.387673 | orchestrator | 2025-03-27 00:46:35.388301 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-27 00:46:35.389296 | orchestrator | Thursday 27 March 2025 00:46:35 +0000 (0:00:00.560) 0:00:26.106 ******** 2025-03-27 00:46:35.550486 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:46:35.550866 | orchestrator | 2025-03-27 00:46:35.554326 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-27 00:46:35.753731 | orchestrator | Thursday 27 March 2025 00:46:35 +0000 (0:00:00.161) 0:00:26.268 ******** 2025-03-27 00:46:35.753825 | 
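After the (here skipped) DB/WAL LV tasks, the play inventories what actually exists on the node: "Get list of Ceph LVs/PVs with associated VGs", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Create list of VG/LV names", then fails if anything declared in lvm_volumes is missing. A sketch of that verification using lvs/pvs JSON reports; the exact options and the _vg_lv_names fact are assumptions, not the real task file:

- name: Get list of Ceph LVs with associated VGs (illustrative sketch)
  ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs (illustrative sketch)
  ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output (illustrative sketch)
  ansible.builtin.set_fact:
    lvm_report:
      lv: "{{ (_lvs_cmd_output.stdout | from_json).report.0.lv }}"
      pv: "{{ (_pvs_cmd_output.stdout | from_json).report.0.pv }}"

- name: Create list of VG/LV names (illustrative sketch)
  ansible.builtin.set_fact:
    _vg_lv_names: "{{ _vg_lv_names | default([]) + [item.vg_name ~ '/' ~ item.lv_name] }}"
  loop: "{{ lvm_report.lv }}"

- name: Fail if block LV defined in lvm_volumes is missing (illustrative sketch)
  ansible.builtin.fail:
    msg: "{{ item.data_vg }}/{{ item.data }} was not created"
  when: (item.data_vg ~ '/' ~ item.data) not in _vg_lv_names
  loop: "{{ lvm_volumes }}"

The lvm_report structure here matches the lv/pv report printed a little further down in the log.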
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'vg_name': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'}) 2025-03-27 00:46:35.754576 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'vg_name': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'}) 2025-03-27 00:46:35.754611 | orchestrator | 2025-03-27 00:46:35.755494 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-27 00:46:35.757274 | orchestrator | Thursday 27 March 2025 00:46:35 +0000 (0:00:00.204) 0:00:26.473 ******** 2025-03-27 00:46:36.175272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:36.176544 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:36.179689 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:36.180107 | orchestrator | 2025-03-27 00:46:36.180135 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-27 00:46:36.180155 | orchestrator | Thursday 27 March 2025 00:46:36 +0000 (0:00:00.421) 0:00:26.894 ******** 2025-03-27 00:46:36.367483 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:36.367854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:36.369028 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:36.370197 | orchestrator | 2025-03-27 00:46:36.371730 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-27 00:46:36.372379 | orchestrator | Thursday 27 March 2025 00:46:36 +0000 (0:00:00.191) 0:00:27.086 ******** 2025-03-27 00:46:36.574625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'})  2025-03-27 00:46:36.575813 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'})  2025-03-27 00:46:36.576959 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:46:36.579089 | orchestrator | 2025-03-27 00:46:36.581668 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-27 00:46:37.331416 | orchestrator | Thursday 27 March 2025 00:46:36 +0000 (0:00:00.207) 0:00:27.293 ******** 2025-03-27 00:46:37.331548 | orchestrator | ok: [testbed-node-3] => { 2025-03-27 00:46:37.331615 | orchestrator |  "lvm_report": { 2025-03-27 00:46:37.331785 | orchestrator |  "lv": [ 2025-03-27 00:46:37.332413 | orchestrator |  { 2025-03-27 00:46:37.332803 | orchestrator |  "lv_name": "osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730", 2025-03-27 00:46:37.333491 | orchestrator |  "vg_name": "ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730" 2025-03-27 00:46:37.334277 | orchestrator |  }, 2025-03-27 00:46:37.334605 | orchestrator |  { 2025-03-27 00:46:37.335382 | orchestrator |  "lv_name": "osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c", 2025-03-27 
00:46:37.335962 | orchestrator |  "vg_name": "ceph-d321ea45-1a00-5698-8092-45c793cb3b8c" 2025-03-27 00:46:37.336021 | orchestrator |  } 2025-03-27 00:46:37.336502 | orchestrator |  ], 2025-03-27 00:46:37.337384 | orchestrator |  "pv": [ 2025-03-27 00:46:37.338375 | orchestrator |  { 2025-03-27 00:46:37.338676 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-27 00:46:37.339468 | orchestrator |  "vg_name": "ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730" 2025-03-27 00:46:37.339740 | orchestrator |  }, 2025-03-27 00:46:37.339990 | orchestrator |  { 2025-03-27 00:46:37.340355 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-27 00:46:37.340456 | orchestrator |  "vg_name": "ceph-d321ea45-1a00-5698-8092-45c793cb3b8c" 2025-03-27 00:46:37.340756 | orchestrator |  } 2025-03-27 00:46:37.341196 | orchestrator |  ] 2025-03-27 00:46:37.341426 | orchestrator |  } 2025-03-27 00:46:37.344339 | orchestrator | } 2025-03-27 00:46:37.344917 | orchestrator | 2025-03-27 00:46:37.345479 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-27 00:46:37.345989 | orchestrator | 2025-03-27 00:46:37.346338 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-27 00:46:37.347130 | orchestrator | Thursday 27 March 2025 00:46:37 +0000 (0:00:00.755) 0:00:28.049 ******** 2025-03-27 00:46:37.831128 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-27 00:46:37.831626 | orchestrator | 2025-03-27 00:46:37.834152 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-27 00:46:38.077769 | orchestrator | Thursday 27 March 2025 00:46:37 +0000 (0:00:00.499) 0:00:28.548 ******** 2025-03-27 00:46:38.077829 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:46:38.078893 | orchestrator | 2025-03-27 00:46:38.080327 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:38.080851 | orchestrator | Thursday 27 March 2025 00:46:38 +0000 (0:00:00.248) 0:00:28.797 ******** 2025-03-27 00:46:38.598435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-03-27 00:46:38.600507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-03-27 00:46:38.600530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-03-27 00:46:38.600538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-03-27 00:46:38.600578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-03-27 00:46:38.600592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-03-27 00:46:38.601963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-03-27 00:46:38.603523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-03-27 00:46:38.604010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-03-27 00:46:38.604309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-03-27 00:46:38.604933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-03-27 00:46:38.605273 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-03-27 00:46:38.606960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-03-27 00:46:38.607167 | orchestrator | 2025-03-27 00:46:38.607220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:38.607240 | orchestrator | Thursday 27 March 2025 00:46:38 +0000 (0:00:00.519) 0:00:29.317 ******** 2025-03-27 00:46:38.831581 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:38.831759 | orchestrator | 2025-03-27 00:46:38.832128 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:38.832534 | orchestrator | Thursday 27 March 2025 00:46:38 +0000 (0:00:00.233) 0:00:29.550 ******** 2025-03-27 00:46:39.045478 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:39.046234 | orchestrator | 2025-03-27 00:46:39.046769 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:39.047109 | orchestrator | Thursday 27 March 2025 00:46:39 +0000 (0:00:00.213) 0:00:29.764 ******** 2025-03-27 00:46:39.270689 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:39.270881 | orchestrator | 2025-03-27 00:46:39.271694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:39.271946 | orchestrator | Thursday 27 March 2025 00:46:39 +0000 (0:00:00.225) 0:00:29.989 ******** 2025-03-27 00:46:39.475444 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:39.475581 | orchestrator | 2025-03-27 00:46:39.476336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:39.476922 | orchestrator | Thursday 27 March 2025 00:46:39 +0000 (0:00:00.203) 0:00:30.193 ******** 2025-03-27 00:46:39.677269 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:39.677492 | orchestrator | 2025-03-27 00:46:39.678673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:39.679532 | orchestrator | Thursday 27 March 2025 00:46:39 +0000 (0:00:00.203) 0:00:30.397 ******** 2025-03-27 00:46:39.908510 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:39.908677 | orchestrator | 2025-03-27 00:46:39.909956 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:39.910982 | orchestrator | Thursday 27 March 2025 00:46:39 +0000 (0:00:00.228) 0:00:30.625 ******** 2025-03-27 00:46:40.322090 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:40.322984 | orchestrator | 2025-03-27 00:46:40.323926 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:40.326459 | orchestrator | Thursday 27 March 2025 00:46:40 +0000 (0:00:00.415) 0:00:31.041 ******** 2025-03-27 00:46:40.523606 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:40.524312 | orchestrator | 2025-03-27 00:46:40.525260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:40.528481 | orchestrator | Thursday 27 March 2025 00:46:40 +0000 (0:00:00.201) 0:00:31.243 ******** 2025-03-27 00:46:40.985135 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666) 2025-03-27 00:46:40.985338 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666) 2025-03-27 00:46:40.985842 | orchestrator | 2025-03-27 00:46:40.988907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:41.436881 | orchestrator | Thursday 27 March 2025 00:46:40 +0000 (0:00:00.460) 0:00:31.704 ******** 2025-03-27 00:46:41.437007 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3b62db4a-d9c9-4dee-909c-fb2dda9345a8) 2025-03-27 00:46:41.437733 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3b62db4a-d9c9-4dee-909c-fb2dda9345a8) 2025-03-27 00:46:41.437768 | orchestrator | 2025-03-27 00:46:41.438522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:41.439375 | orchestrator | Thursday 27 March 2025 00:46:41 +0000 (0:00:00.452) 0:00:32.156 ******** 2025-03-27 00:46:41.917124 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5498cf3d-971d-4d04-a26e-caa954b0ff0a) 2025-03-27 00:46:41.917501 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5498cf3d-971d-4d04-a26e-caa954b0ff0a) 2025-03-27 00:46:41.918157 | orchestrator | 2025-03-27 00:46:41.918241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:41.918600 | orchestrator | Thursday 27 March 2025 00:46:41 +0000 (0:00:00.479) 0:00:32.635 ******** 2025-03-27 00:46:42.421305 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a8735590-8c0d-455a-9e36-1ed693cbdd10) 2025-03-27 00:46:42.422006 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a8735590-8c0d-455a-9e36-1ed693cbdd10) 2025-03-27 00:46:42.422099 | orchestrator | 2025-03-27 00:46:42.422234 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:46:42.422773 | orchestrator | Thursday 27 March 2025 00:46:42 +0000 (0:00:00.505) 0:00:33.141 ******** 2025-03-27 00:46:42.788107 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-27 00:46:42.792099 | orchestrator | 2025-03-27 00:46:42.793324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:42.794551 | orchestrator | Thursday 27 March 2025 00:46:42 +0000 (0:00:00.360) 0:00:33.501 ******** 2025-03-27 00:46:43.297551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-03-27 00:46:43.300744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-03-27 00:46:43.300792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-03-27 00:46:43.303584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-03-27 00:46:43.303634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-03-27 00:46:43.304885 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-03-27 00:46:43.305840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-03-27 00:46:43.306261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-03-27 00:46:43.306841 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-03-27 00:46:43.307373 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-03-27 00:46:43.307837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-03-27 00:46:43.308163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-03-27 00:46:43.310396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-03-27 00:46:43.310466 | orchestrator | 2025-03-27 00:46:43.310484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:43.310503 | orchestrator | Thursday 27 March 2025 00:46:43 +0000 (0:00:00.512) 0:00:34.014 ******** 2025-03-27 00:46:43.522514 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:43.523596 | orchestrator | 2025-03-27 00:46:43.524679 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:43.525548 | orchestrator | Thursday 27 March 2025 00:46:43 +0000 (0:00:00.227) 0:00:34.242 ******** 2025-03-27 00:46:43.771226 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:43.771402 | orchestrator | 2025-03-27 00:46:43.772676 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:43.773311 | orchestrator | Thursday 27 March 2025 00:46:43 +0000 (0:00:00.247) 0:00:34.489 ******** 2025-03-27 00:46:44.286979 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:44.288167 | orchestrator | 2025-03-27 00:46:44.288247 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:44.288716 | orchestrator | Thursday 27 March 2025 00:46:44 +0000 (0:00:00.515) 0:00:35.005 ******** 2025-03-27 00:46:44.507754 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:44.508780 | orchestrator | 2025-03-27 00:46:44.510093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:44.511020 | orchestrator | Thursday 27 March 2025 00:46:44 +0000 (0:00:00.222) 0:00:35.227 ******** 2025-03-27 00:46:44.734454 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:44.734966 | orchestrator | 2025-03-27 00:46:44.734996 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:44.735016 | orchestrator | Thursday 27 March 2025 00:46:44 +0000 (0:00:00.226) 0:00:35.453 ******** 2025-03-27 00:46:44.952285 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:44.952630 | orchestrator | 2025-03-27 00:46:44.953148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:44.953201 | orchestrator | Thursday 27 March 2025 00:46:44 +0000 (0:00:00.218) 0:00:35.671 ******** 2025-03-27 00:46:45.168459 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:45.168714 | orchestrator | 2025-03-27 00:46:45.168743 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:45.168765 | orchestrator | Thursday 27 March 2025 00:46:45 +0000 (0:00:00.216) 0:00:35.888 ******** 2025-03-27 00:46:45.380422 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:45.380879 | orchestrator | 2025-03-27 00:46:45.381648 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-03-27 00:46:45.382218 | orchestrator | Thursday 27 March 2025 00:46:45 +0000 (0:00:00.209) 0:00:36.098 ******** 2025-03-27 00:46:46.095606 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-03-27 00:46:46.095764 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-03-27 00:46:46.096106 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-03-27 00:46:46.096588 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-03-27 00:46:46.096768 | orchestrator | 2025-03-27 00:46:46.097230 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:46.097550 | orchestrator | Thursday 27 March 2025 00:46:46 +0000 (0:00:00.717) 0:00:36.815 ******** 2025-03-27 00:46:46.336633 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:46.539207 | orchestrator | 2025-03-27 00:46:46.539285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:46.539300 | orchestrator | Thursday 27 March 2025 00:46:46 +0000 (0:00:00.241) 0:00:37.057 ******** 2025-03-27 00:46:46.539324 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:46.539972 | orchestrator | 2025-03-27 00:46:46.541105 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:46.541594 | orchestrator | Thursday 27 March 2025 00:46:46 +0000 (0:00:00.202) 0:00:37.259 ******** 2025-03-27 00:46:46.778388 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:46.781395 | orchestrator | 2025-03-27 00:46:46.790294 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:46:47.487848 | orchestrator | Thursday 27 March 2025 00:46:46 +0000 (0:00:00.234) 0:00:37.493 ******** 2025-03-27 00:46:47.487995 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:47.488439 | orchestrator | 2025-03-27 00:46:47.489434 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-27 00:46:47.490501 | orchestrator | Thursday 27 March 2025 00:46:47 +0000 (0:00:00.713) 0:00:38.207 ******** 2025-03-27 00:46:47.634857 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:47.635009 | orchestrator | 2025-03-27 00:46:47.636093 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-27 00:46:47.636722 | orchestrator | Thursday 27 March 2025 00:46:47 +0000 (0:00:00.147) 0:00:38.355 ******** 2025-03-27 00:46:47.865739 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bac76156-9f65-5e37-8447-16c40269f5cf'}}) 2025-03-27 00:46:47.866290 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'}}) 2025-03-27 00:46:47.866684 | orchestrator | 2025-03-27 00:46:47.867594 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-27 00:46:47.870080 | orchestrator | Thursday 27 March 2025 00:46:47 +0000 (0:00:00.230) 0:00:38.585 ******** 2025-03-27 00:46:49.898160 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'}) 2025-03-27 00:46:49.898359 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 
'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'}) 2025-03-27 00:46:49.899710 | orchestrator | 2025-03-27 00:46:49.900673 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-27 00:46:49.906308 | orchestrator | Thursday 27 March 2025 00:46:49 +0000 (0:00:02.030) 0:00:40.615 ******** 2025-03-27 00:46:50.111237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:46:50.111882 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:46:50.113431 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:50.114805 | orchestrator | 2025-03-27 00:46:50.115598 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-27 00:46:50.116250 | orchestrator | Thursday 27 March 2025 00:46:50 +0000 (0:00:00.214) 0:00:40.830 ******** 2025-03-27 00:46:51.553332 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'}) 2025-03-27 00:46:51.556036 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'}) 2025-03-27 00:46:51.556747 | orchestrator | 2025-03-27 00:46:51.558437 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-27 00:46:51.559156 | orchestrator | Thursday 27 March 2025 00:46:51 +0000 (0:00:01.440) 0:00:42.271 ******** 2025-03-27 00:46:51.717854 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:46:51.718526 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:46:51.719361 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:51.720103 | orchestrator | 2025-03-27 00:46:51.723353 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-27 00:46:51.723420 | orchestrator | Thursday 27 March 2025 00:46:51 +0000 (0:00:00.166) 0:00:42.437 ******** 2025-03-27 00:46:51.884472 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:51.884838 | orchestrator | 2025-03-27 00:46:51.885952 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-27 00:46:51.886342 | orchestrator | Thursday 27 March 2025 00:46:51 +0000 (0:00:00.165) 0:00:42.603 ******** 2025-03-27 00:46:52.061981 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:46:52.063322 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:46:52.063654 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:52.064851 | orchestrator | 2025-03-27 00:46:52.065509 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-27 00:46:52.066288 | orchestrator | Thursday 
27 March 2025 00:46:52 +0000 (0:00:00.176) 0:00:42.780 ******** 2025-03-27 00:46:52.420018 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:52.420306 | orchestrator | 2025-03-27 00:46:52.421012 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-27 00:46:52.421671 | orchestrator | Thursday 27 March 2025 00:46:52 +0000 (0:00:00.358) 0:00:43.138 ******** 2025-03-27 00:46:52.620034 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:46:52.620276 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:46:52.622094 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:52.622685 | orchestrator | 2025-03-27 00:46:52.623522 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-27 00:46:52.624263 | orchestrator | Thursday 27 March 2025 00:46:52 +0000 (0:00:00.201) 0:00:43.340 ******** 2025-03-27 00:46:52.767533 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:52.941450 | orchestrator | 2025-03-27 00:46:52.941536 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-27 00:46:52.941552 | orchestrator | Thursday 27 March 2025 00:46:52 +0000 (0:00:00.144) 0:00:43.484 ******** 2025-03-27 00:46:52.941579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:46:52.942162 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:46:52.942635 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:52.943030 | orchestrator | 2025-03-27 00:46:52.943890 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-27 00:46:52.944350 | orchestrator | Thursday 27 March 2025 00:46:52 +0000 (0:00:00.176) 0:00:43.661 ******** 2025-03-27 00:46:53.107925 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:46:53.109032 | orchestrator | 2025-03-27 00:46:53.109640 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-27 00:46:53.110474 | orchestrator | Thursday 27 March 2025 00:46:53 +0000 (0:00:00.165) 0:00:43.826 ******** 2025-03-27 00:46:53.283161 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:46:53.283759 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:46:53.284889 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:53.285340 | orchestrator | 2025-03-27 00:46:53.286219 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-27 00:46:53.286327 | orchestrator | Thursday 27 March 2025 00:46:53 +0000 (0:00:00.175) 0:00:44.002 ******** 2025-03-27 00:46:53.484367 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 
'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:46:53.486253 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:46:53.487436 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:53.488866 | orchestrator | 2025-03-27 00:46:53.488942 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-27 00:46:53.489943 | orchestrator | Thursday 27 March 2025 00:46:53 +0000 (0:00:00.201) 0:00:44.204 ******** 2025-03-27 00:46:53.658291 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:46:53.658961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:46:53.660385 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:53.661478 | orchestrator | 2025-03-27 00:46:53.662343 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-27 00:46:53.662899 | orchestrator | Thursday 27 March 2025 00:46:53 +0000 (0:00:00.171) 0:00:44.376 ******** 2025-03-27 00:46:53.824032 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:53.824927 | orchestrator | 2025-03-27 00:46:53.825672 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-27 00:46:53.826765 | orchestrator | Thursday 27 March 2025 00:46:53 +0000 (0:00:00.167) 0:00:44.544 ******** 2025-03-27 00:46:53.970156 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:53.970940 | orchestrator | 2025-03-27 00:46:53.972278 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-27 00:46:53.975874 | orchestrator | Thursday 27 March 2025 00:46:53 +0000 (0:00:00.145) 0:00:44.689 ******** 2025-03-27 00:46:54.126778 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:54.129479 | orchestrator | 2025-03-27 00:46:54.130092 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-27 00:46:54.131850 | orchestrator | Thursday 27 March 2025 00:46:54 +0000 (0:00:00.154) 0:00:44.844 ******** 2025-03-27 00:46:54.520304 | orchestrator | ok: [testbed-node-4] => { 2025-03-27 00:46:54.521761 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-27 00:46:54.525211 | orchestrator | } 2025-03-27 00:46:54.526442 | orchestrator | 2025-03-27 00:46:54.526472 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-27 00:46:54.526493 | orchestrator | Thursday 27 March 2025 00:46:54 +0000 (0:00:00.394) 0:00:45.239 ******** 2025-03-27 00:46:54.674575 | orchestrator | ok: [testbed-node-4] => { 2025-03-27 00:46:54.675541 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-27 00:46:54.676363 | orchestrator | } 2025-03-27 00:46:54.679092 | orchestrator | 2025-03-27 00:46:54.870236 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-27 00:46:54.870308 | orchestrator | Thursday 27 March 2025 00:46:54 +0000 (0:00:00.154) 0:00:45.394 ******** 2025-03-27 00:46:54.870334 | orchestrator | ok: [testbed-node-4] => { 2025-03-27 00:46:54.871483 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-27 
00:46:54.873233 | orchestrator | } 2025-03-27 00:46:54.874085 | orchestrator | 2025-03-27 00:46:54.876359 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-27 00:46:54.876703 | orchestrator | Thursday 27 March 2025 00:46:54 +0000 (0:00:00.195) 0:00:45.589 ******** 2025-03-27 00:46:55.450968 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:46:55.452721 | orchestrator | 2025-03-27 00:46:55.452763 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-27 00:46:56.025451 | orchestrator | Thursday 27 March 2025 00:46:55 +0000 (0:00:00.578) 0:00:46.168 ******** 2025-03-27 00:46:56.025573 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:46:56.025644 | orchestrator | 2025-03-27 00:46:56.026109 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-27 00:46:56.026896 | orchestrator | Thursday 27 March 2025 00:46:56 +0000 (0:00:00.574) 0:00:46.742 ******** 2025-03-27 00:46:56.616614 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:46:56.617383 | orchestrator | 2025-03-27 00:46:56.617420 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-27 00:46:56.619875 | orchestrator | Thursday 27 March 2025 00:46:56 +0000 (0:00:00.591) 0:00:47.334 ******** 2025-03-27 00:46:56.770619 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:46:56.771247 | orchestrator | 2025-03-27 00:46:56.772722 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-27 00:46:56.774134 | orchestrator | Thursday 27 March 2025 00:46:56 +0000 (0:00:00.156) 0:00:47.490 ******** 2025-03-27 00:46:56.901401 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:56.902470 | orchestrator | 2025-03-27 00:46:56.905815 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-27 00:46:56.909583 | orchestrator | Thursday 27 March 2025 00:46:56 +0000 (0:00:00.127) 0:00:47.618 ******** 2025-03-27 00:46:57.047084 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:57.047647 | orchestrator | 2025-03-27 00:46:57.050201 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-27 00:46:57.229740 | orchestrator | Thursday 27 March 2025 00:46:57 +0000 (0:00:00.146) 0:00:47.765 ******** 2025-03-27 00:46:57.229854 | orchestrator | ok: [testbed-node-4] => { 2025-03-27 00:46:57.231377 | orchestrator |  "vgs_report": { 2025-03-27 00:46:57.231440 | orchestrator |  "vg": [] 2025-03-27 00:46:57.233776 | orchestrator |  } 2025-03-27 00:46:57.233951 | orchestrator | } 2025-03-27 00:46:57.235168 | orchestrator | 2025-03-27 00:46:57.235649 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-27 00:46:57.236420 | orchestrator | Thursday 27 March 2025 00:46:57 +0000 (0:00:00.181) 0:00:47.946 ******** 2025-03-27 00:46:57.377417 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:57.378516 | orchestrator | 2025-03-27 00:46:57.379640 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-27 00:46:57.380566 | orchestrator | Thursday 27 March 2025 00:46:57 +0000 (0:00:00.150) 0:00:48.096 ******** 2025-03-27 00:46:57.765515 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:57.766394 | orchestrator | 2025-03-27 00:46:57.767120 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-03-27 00:46:57.768079 | orchestrator | Thursday 27 March 2025 00:46:57 +0000 (0:00:00.386) 0:00:48.484 ******** 2025-03-27 00:46:57.923575 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:57.925114 | orchestrator | 2025-03-27 00:46:57.926654 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-27 00:46:57.927296 | orchestrator | Thursday 27 March 2025 00:46:57 +0000 (0:00:00.158) 0:00:48.642 ******** 2025-03-27 00:46:58.082924 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:58.083582 | orchestrator | 2025-03-27 00:46:58.083784 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-27 00:46:58.084110 | orchestrator | Thursday 27 March 2025 00:46:58 +0000 (0:00:00.160) 0:00:48.803 ******** 2025-03-27 00:46:58.244166 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:58.246679 | orchestrator | 2025-03-27 00:46:58.248073 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-27 00:46:58.248373 | orchestrator | Thursday 27 March 2025 00:46:58 +0000 (0:00:00.159) 0:00:48.962 ******** 2025-03-27 00:46:58.410633 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:58.413060 | orchestrator | 2025-03-27 00:46:58.413972 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-27 00:46:58.414007 | orchestrator | Thursday 27 March 2025 00:46:58 +0000 (0:00:00.167) 0:00:49.130 ******** 2025-03-27 00:46:58.574065 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:58.575407 | orchestrator | 2025-03-27 00:46:58.576299 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-27 00:46:58.577074 | orchestrator | Thursday 27 March 2025 00:46:58 +0000 (0:00:00.163) 0:00:49.293 ******** 2025-03-27 00:46:58.737554 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:58.738211 | orchestrator | 2025-03-27 00:46:58.738892 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-27 00:46:58.740148 | orchestrator | Thursday 27 March 2025 00:46:58 +0000 (0:00:00.163) 0:00:49.456 ******** 2025-03-27 00:46:58.916522 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:58.917958 | orchestrator | 2025-03-27 00:46:59.070441 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-27 00:46:59.070513 | orchestrator | Thursday 27 March 2025 00:46:58 +0000 (0:00:00.179) 0:00:49.635 ******** 2025-03-27 00:46:59.070540 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:59.071408 | orchestrator | 2025-03-27 00:46:59.072140 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-27 00:46:59.072923 | orchestrator | Thursday 27 March 2025 00:46:59 +0000 (0:00:00.154) 0:00:49.789 ******** 2025-03-27 00:46:59.219706 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:59.220321 | orchestrator | 2025-03-27 00:46:59.221492 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-27 00:46:59.222124 | orchestrator | Thursday 27 March 2025 00:46:59 +0000 (0:00:00.148) 0:00:49.938 ******** 2025-03-27 00:46:59.381968 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:59.382645 | orchestrator | 2025-03-27 00:46:59.383681 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-03-27 00:46:59.385271 | orchestrator | Thursday 27 March 2025 00:46:59 +0000 (0:00:00.162) 0:00:50.101 ******** 2025-03-27 00:46:59.546715 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:59.546929 | orchestrator | 2025-03-27 00:46:59.548387 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-27 00:46:59.548625 | orchestrator | Thursday 27 March 2025 00:46:59 +0000 (0:00:00.164) 0:00:50.266 ******** 2025-03-27 00:46:59.932756 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:46:59.933808 | orchestrator | 2025-03-27 00:46:59.934114 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-27 00:46:59.937031 | orchestrator | Thursday 27 March 2025 00:46:59 +0000 (0:00:00.385) 0:00:50.652 ******** 2025-03-27 00:47:00.137488 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:47:00.138332 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:47:00.139816 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:00.140788 | orchestrator | 2025-03-27 00:47:00.142414 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-27 00:47:00.143278 | orchestrator | Thursday 27 March 2025 00:47:00 +0000 (0:00:00.200) 0:00:50.853 ******** 2025-03-27 00:47:00.318559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:47:00.320946 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:47:00.322277 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:00.324666 | orchestrator | 2025-03-27 00:47:00.324703 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-27 00:47:00.325521 | orchestrator | Thursday 27 March 2025 00:47:00 +0000 (0:00:00.184) 0:00:51.038 ******** 2025-03-27 00:47:00.548997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:47:00.550006 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:47:00.551319 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:00.552918 | orchestrator | 2025-03-27 00:47:00.553969 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-27 00:47:00.554615 | orchestrator | Thursday 27 March 2025 00:47:00 +0000 (0:00:00.230) 0:00:51.268 ******** 2025-03-27 00:47:00.747642 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:47:00.749023 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 
00:47:00.749871 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:00.751352 | orchestrator | 2025-03-27 00:47:00.751795 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-27 00:47:00.752631 | orchestrator | Thursday 27 March 2025 00:47:00 +0000 (0:00:00.198) 0:00:51.467 ******** 2025-03-27 00:47:00.931062 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:47:00.932289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:47:00.933824 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:00.934310 | orchestrator | 2025-03-27 00:47:00.935573 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-27 00:47:00.936587 | orchestrator | Thursday 27 March 2025 00:47:00 +0000 (0:00:00.183) 0:00:51.650 ******** 2025-03-27 00:47:01.103537 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:47:01.105790 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:47:01.107043 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:01.108771 | orchestrator | 2025-03-27 00:47:01.109612 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-27 00:47:01.110592 | orchestrator | Thursday 27 March 2025 00:47:01 +0000 (0:00:00.171) 0:00:51.822 ******** 2025-03-27 00:47:01.279455 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:47:01.280663 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:47:01.281669 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:01.283986 | orchestrator | 2025-03-27 00:47:01.284274 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-27 00:47:01.284587 | orchestrator | Thursday 27 March 2025 00:47:01 +0000 (0:00:00.175) 0:00:51.997 ******** 2025-03-27 00:47:01.449535 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:47:01.450634 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:47:01.451406 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:01.453314 | orchestrator | 2025-03-27 00:47:01.453584 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-27 00:47:01.454677 | orchestrator | Thursday 27 March 2025 00:47:01 +0000 (0:00:00.171) 0:00:52.169 ******** 2025-03-27 00:47:02.015483 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:47:02.015635 | orchestrator | 2025-03-27 00:47:02.017080 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-03-27 00:47:02.018319 | orchestrator | Thursday 27 March 2025 00:47:02 +0000 (0:00:00.565) 0:00:52.734 ******** 2025-03-27 00:47:02.879737 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:47:02.880402 | orchestrator | 2025-03-27 00:47:02.883170 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-27 00:47:03.047532 | orchestrator | Thursday 27 March 2025 00:47:02 +0000 (0:00:00.863) 0:00:53.598 ******** 2025-03-27 00:47:03.047685 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:47:03.048028 | orchestrator | 2025-03-27 00:47:03.049437 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-27 00:47:03.051069 | orchestrator | Thursday 27 March 2025 00:47:03 +0000 (0:00:00.168) 0:00:53.766 ******** 2025-03-27 00:47:03.249611 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'vg_name': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'}) 2025-03-27 00:47:03.250097 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'vg_name': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'}) 2025-03-27 00:47:03.252304 | orchestrator | 2025-03-27 00:47:03.252838 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-27 00:47:03.253782 | orchestrator | Thursday 27 March 2025 00:47:03 +0000 (0:00:00.201) 0:00:53.968 ******** 2025-03-27 00:47:03.474213 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:47:03.474379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:47:03.476066 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:03.477156 | orchestrator | 2025-03-27 00:47:03.477299 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-27 00:47:03.477764 | orchestrator | Thursday 27 March 2025 00:47:03 +0000 (0:00:00.225) 0:00:54.194 ******** 2025-03-27 00:47:03.654294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:47:03.654636 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:47:03.656202 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:03.656942 | orchestrator | 2025-03-27 00:47:03.658089 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-27 00:47:03.658488 | orchestrator | Thursday 27 March 2025 00:47:03 +0000 (0:00:00.177) 0:00:54.371 ******** 2025-03-27 00:47:03.838241 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'})  2025-03-27 00:47:03.838721 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'})  2025-03-27 00:47:03.839671 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:03.841583 | orchestrator | 2025-03-27 
00:47:03.842642 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-27 00:47:03.844010 | orchestrator | Thursday 27 March 2025 00:47:03 +0000 (0:00:00.186) 0:00:54.558 ******** 2025-03-27 00:47:04.781295 | orchestrator | ok: [testbed-node-4] => { 2025-03-27 00:47:04.781876 | orchestrator |  "lvm_report": { 2025-03-27 00:47:04.782213 | orchestrator |  "lv": [ 2025-03-27 00:47:04.782615 | orchestrator |  { 2025-03-27 00:47:04.783117 | orchestrator |  "lv_name": "osd-block-bac76156-9f65-5e37-8447-16c40269f5cf", 2025-03-27 00:47:04.783589 | orchestrator |  "vg_name": "ceph-bac76156-9f65-5e37-8447-16c40269f5cf" 2025-03-27 00:47:04.784964 | orchestrator |  }, 2025-03-27 00:47:04.785752 | orchestrator |  { 2025-03-27 00:47:04.786427 | orchestrator |  "lv_name": "osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b", 2025-03-27 00:47:04.787377 | orchestrator |  "vg_name": "ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b" 2025-03-27 00:47:04.787851 | orchestrator |  } 2025-03-27 00:47:04.788845 | orchestrator |  ], 2025-03-27 00:47:04.789288 | orchestrator |  "pv": [ 2025-03-27 00:47:04.790306 | orchestrator |  { 2025-03-27 00:47:04.790852 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-27 00:47:04.791257 | orchestrator |  "vg_name": "ceph-bac76156-9f65-5e37-8447-16c40269f5cf" 2025-03-27 00:47:04.792102 | orchestrator |  }, 2025-03-27 00:47:04.793283 | orchestrator |  { 2025-03-27 00:47:04.795232 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-27 00:47:04.795486 | orchestrator |  "vg_name": "ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b" 2025-03-27 00:47:04.796110 | orchestrator |  } 2025-03-27 00:47:04.796813 | orchestrator |  ] 2025-03-27 00:47:04.798269 | orchestrator |  } 2025-03-27 00:47:04.799200 | orchestrator | } 2025-03-27 00:47:04.799833 | orchestrator | 2025-03-27 00:47:04.800542 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-27 00:47:04.801233 | orchestrator | 2025-03-27 00:47:04.801931 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-27 00:47:04.802229 | orchestrator | Thursday 27 March 2025 00:47:04 +0000 (0:00:00.939) 0:00:55.498 ******** 2025-03-27 00:47:05.044276 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-27 00:47:05.045559 | orchestrator | 2025-03-27 00:47:05.046588 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-27 00:47:05.047126 | orchestrator | Thursday 27 March 2025 00:47:05 +0000 (0:00:00.266) 0:00:55.764 ******** 2025-03-27 00:47:05.302777 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:47:05.303323 | orchestrator | 2025-03-27 00:47:05.305877 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:05.816972 | orchestrator | Thursday 27 March 2025 00:47:05 +0000 (0:00:00.256) 0:00:56.020 ******** 2025-03-27 00:47:05.817082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-03-27 00:47:05.818119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-03-27 00:47:05.818426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-03-27 00:47:05.819927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-03-27 00:47:05.821019 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-03-27 00:47:05.821050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-03-27 00:47:05.821511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-03-27 00:47:05.822953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-03-27 00:47:05.823824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-03-27 00:47:05.824984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-03-27 00:47:05.825440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-03-27 00:47:05.826365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-03-27 00:47:05.826471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-03-27 00:47:05.827554 | orchestrator | 2025-03-27 00:47:05.827725 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:05.827755 | orchestrator | Thursday 27 March 2025 00:47:05 +0000 (0:00:00.516) 0:00:56.537 ******** 2025-03-27 00:47:06.022492 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:06.023329 | orchestrator | 2025-03-27 00:47:06.024295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:06.025406 | orchestrator | Thursday 27 March 2025 00:47:06 +0000 (0:00:00.205) 0:00:56.742 ******** 2025-03-27 00:47:06.231081 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:06.231410 | orchestrator | 2025-03-27 00:47:06.232475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:06.234336 | orchestrator | Thursday 27 March 2025 00:47:06 +0000 (0:00:00.207) 0:00:56.950 ******** 2025-03-27 00:47:06.496417 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:06.496980 | orchestrator | 2025-03-27 00:47:06.499074 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:06.499148 | orchestrator | Thursday 27 March 2025 00:47:06 +0000 (0:00:00.263) 0:00:57.213 ******** 2025-03-27 00:47:06.707714 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:06.709248 | orchestrator | 2025-03-27 00:47:06.709889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:06.711209 | orchestrator | Thursday 27 March 2025 00:47:06 +0000 (0:00:00.212) 0:00:57.426 ******** 2025-03-27 00:47:07.142655 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:07.143613 | orchestrator | 2025-03-27 00:47:07.143738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:07.143809 | orchestrator | Thursday 27 March 2025 00:47:07 +0000 (0:00:00.436) 0:00:57.863 ******** 2025-03-27 00:47:07.342970 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:07.343314 | orchestrator | 2025-03-27 00:47:07.346320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:07.592921 | orchestrator | Thursday 27 March 2025 00:47:07 +0000 (0:00:00.198) 0:00:58.061 ******** 2025-03-27 00:47:07.593001 | orchestrator | skipping: 
[testbed-node-5] 2025-03-27 00:47:07.594098 | orchestrator | 2025-03-27 00:47:07.595718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:07.596944 | orchestrator | Thursday 27 March 2025 00:47:07 +0000 (0:00:00.251) 0:00:58.312 ******** 2025-03-27 00:47:07.805633 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:07.806422 | orchestrator | 2025-03-27 00:47:07.807444 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:07.808315 | orchestrator | Thursday 27 March 2025 00:47:07 +0000 (0:00:00.212) 0:00:58.525 ******** 2025-03-27 00:47:08.250796 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807) 2025-03-27 00:47:08.252075 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807) 2025-03-27 00:47:08.253116 | orchestrator | 2025-03-27 00:47:08.254933 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:08.256416 | orchestrator | Thursday 27 March 2025 00:47:08 +0000 (0:00:00.445) 0:00:58.970 ******** 2025-03-27 00:47:08.777425 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a6b08226-ae04-4ebb-8f92-51d42c32f5ac) 2025-03-27 00:47:08.778424 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a6b08226-ae04-4ebb-8f92-51d42c32f5ac) 2025-03-27 00:47:08.779815 | orchestrator | 2025-03-27 00:47:08.781388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:08.782621 | orchestrator | Thursday 27 March 2025 00:47:08 +0000 (0:00:00.526) 0:00:59.496 ******** 2025-03-27 00:47:09.248788 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3ba6755c-983a-4f3d-8d53-7abda8c22d5d) 2025-03-27 00:47:09.250385 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3ba6755c-983a-4f3d-8d53-7abda8c22d5d) 2025-03-27 00:47:09.252949 | orchestrator | 2025-03-27 00:47:09.253368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:09.255257 | orchestrator | Thursday 27 March 2025 00:47:09 +0000 (0:00:00.470) 0:00:59.966 ******** 2025-03-27 00:47:09.737956 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0b86602b-3b4a-4669-b84e-8d0be08a4eb8) 2025-03-27 00:47:09.739153 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0b86602b-3b4a-4669-b84e-8d0be08a4eb8) 2025-03-27 00:47:09.740239 | orchestrator | 2025-03-27 00:47:09.740273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-27 00:47:09.741270 | orchestrator | Thursday 27 March 2025 00:47:09 +0000 (0:00:00.488) 0:01:00.455 ******** 2025-03-27 00:47:10.110976 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-27 00:47:10.111247 | orchestrator | 2025-03-27 00:47:10.111848 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:10.111881 | orchestrator | Thursday 27 March 2025 00:47:10 +0000 (0:00:00.375) 0:01:00.831 ******** 2025-03-27 00:47:10.809816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-03-27 00:47:10.812413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
2025-03-27 00:47:10.813469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-03-27 00:47:10.813497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-03-27 00:47:10.813516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-03-27 00:47:10.814866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-03-27 00:47:10.816045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-03-27 00:47:10.817295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-03-27 00:47:10.818004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-03-27 00:47:10.818791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-03-27 00:47:10.819274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-03-27 00:47:10.819979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-03-27 00:47:10.821321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-03-27 00:47:10.822670 | orchestrator | 2025-03-27 00:47:10.824348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:10.824582 | orchestrator | Thursday 27 March 2025 00:47:10 +0000 (0:00:00.696) 0:01:01.527 ******** 2025-03-27 00:47:11.045422 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:11.047714 | orchestrator | 2025-03-27 00:47:11.265736 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:11.265832 | orchestrator | Thursday 27 March 2025 00:47:11 +0000 (0:00:00.235) 0:01:01.762 ******** 2025-03-27 00:47:11.265861 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:11.266119 | orchestrator | 2025-03-27 00:47:11.267288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:11.267885 | orchestrator | Thursday 27 March 2025 00:47:11 +0000 (0:00:00.222) 0:01:01.985 ******** 2025-03-27 00:47:11.489209 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:11.489450 | orchestrator | 2025-03-27 00:47:11.492523 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:11.700813 | orchestrator | Thursday 27 March 2025 00:47:11 +0000 (0:00:00.222) 0:01:02.207 ******** 2025-03-27 00:47:11.700861 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:11.701240 | orchestrator | 2025-03-27 00:47:11.702090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:11.702970 | orchestrator | Thursday 27 March 2025 00:47:11 +0000 (0:00:00.212) 0:01:02.420 ******** 2025-03-27 00:47:11.925599 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:11.927378 | orchestrator | 2025-03-27 00:47:11.928522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:11.928549 | orchestrator | Thursday 27 March 2025 00:47:11 +0000 (0:00:00.220) 0:01:02.641 ******** 2025-03-27 00:47:12.145532 | orchestrator | 
skipping: [testbed-node-5] 2025-03-27 00:47:12.145944 | orchestrator | 2025-03-27 00:47:12.147050 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:12.149508 | orchestrator | Thursday 27 March 2025 00:47:12 +0000 (0:00:00.223) 0:01:02.864 ******** 2025-03-27 00:47:12.358684 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:12.359318 | orchestrator | 2025-03-27 00:47:12.360061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:12.361076 | orchestrator | Thursday 27 March 2025 00:47:12 +0000 (0:00:00.213) 0:01:03.078 ******** 2025-03-27 00:47:12.606013 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:12.606314 | orchestrator | 2025-03-27 00:47:12.609157 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:12.609355 | orchestrator | Thursday 27 March 2025 00:47:12 +0000 (0:00:00.245) 0:01:03.324 ******** 2025-03-27 00:47:13.596446 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-03-27 00:47:13.597684 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-03-27 00:47:13.600726 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-03-27 00:47:13.600871 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-03-27 00:47:13.600897 | orchestrator | 2025-03-27 00:47:13.601778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:13.602567 | orchestrator | Thursday 27 March 2025 00:47:13 +0000 (0:00:00.990) 0:01:04.314 ******** 2025-03-27 00:47:14.286289 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:14.286738 | orchestrator | 2025-03-27 00:47:14.287512 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:14.288486 | orchestrator | Thursday 27 March 2025 00:47:14 +0000 (0:00:00.691) 0:01:05.006 ******** 2025-03-27 00:47:14.515564 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:14.516257 | orchestrator | 2025-03-27 00:47:14.519835 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:14.745571 | orchestrator | Thursday 27 March 2025 00:47:14 +0000 (0:00:00.228) 0:01:05.234 ******** 2025-03-27 00:47:14.745696 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:14.746945 | orchestrator | 2025-03-27 00:47:14.748092 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-27 00:47:14.751137 | orchestrator | Thursday 27 March 2025 00:47:14 +0000 (0:00:00.230) 0:01:05.464 ******** 2025-03-27 00:47:14.958058 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:14.958293 | orchestrator | 2025-03-27 00:47:14.959209 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-27 00:47:14.959239 | orchestrator | Thursday 27 March 2025 00:47:14 +0000 (0:00:00.212) 0:01:05.677 ******** 2025-03-27 00:47:15.104655 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:15.104816 | orchestrator | 2025-03-27 00:47:15.105113 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-27 00:47:15.105684 | orchestrator | Thursday 27 March 2025 00:47:15 +0000 (0:00:00.147) 0:01:05.825 ******** 2025-03-27 00:47:15.325788 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'923c5540-3b69-54d6-b090-bccde0d698f1'}}) 2025-03-27 00:47:15.326735 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8acd0346-cc61-560a-be8a-825f05553edd'}}) 2025-03-27 00:47:15.327440 | orchestrator | 2025-03-27 00:47:15.328298 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-27 00:47:15.329142 | orchestrator | Thursday 27 March 2025 00:47:15 +0000 (0:00:00.220) 0:01:06.046 ******** 2025-03-27 00:47:17.237692 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'}) 2025-03-27 00:47:17.240750 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'}) 2025-03-27 00:47:17.242593 | orchestrator | 2025-03-27 00:47:17.245263 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-27 00:47:17.248026 | orchestrator | Thursday 27 March 2025 00:47:17 +0000 (0:00:01.908) 0:01:07.954 ******** 2025-03-27 00:47:17.417224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:17.417400 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:17.417424 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:17.417442 | orchestrator | 2025-03-27 00:47:17.417457 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-27 00:47:17.417479 | orchestrator | Thursday 27 March 2025 00:47:17 +0000 (0:00:00.179) 0:01:08.133 ******** 2025-03-27 00:47:18.792386 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'}) 2025-03-27 00:47:18.794823 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'}) 2025-03-27 00:47:18.795422 | orchestrator | 2025-03-27 00:47:18.795460 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-27 00:47:18.795483 | orchestrator | Thursday 27 March 2025 00:47:18 +0000 (0:00:01.374) 0:01:09.508 ******** 2025-03-27 00:47:19.187226 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:19.187831 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:19.188592 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:19.190866 | orchestrator | 2025-03-27 00:47:19.192112 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-27 00:47:19.192144 | orchestrator | Thursday 27 March 2025 00:47:19 +0000 (0:00:00.398) 0:01:09.906 ******** 2025-03-27 00:47:19.340710 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:19.341588 | orchestrator | 2025-03-27 00:47:19.343318 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-03-27 00:47:19.344381 | orchestrator | Thursday 27 March 2025 00:47:19 +0000 (0:00:00.152) 0:01:10.058 ******** 2025-03-27 00:47:19.506123 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:19.507386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:19.507948 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:19.508911 | orchestrator | 2025-03-27 00:47:19.509539 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-27 00:47:19.509910 | orchestrator | Thursday 27 March 2025 00:47:19 +0000 (0:00:00.167) 0:01:10.225 ******** 2025-03-27 00:47:19.658316 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:19.658619 | orchestrator | 2025-03-27 00:47:19.660675 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-27 00:47:19.661709 | orchestrator | Thursday 27 March 2025 00:47:19 +0000 (0:00:00.150) 0:01:10.376 ******** 2025-03-27 00:47:19.849944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:19.852343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:19.852777 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:19.853849 | orchestrator | 2025-03-27 00:47:19.855167 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-27 00:47:19.855830 | orchestrator | Thursday 27 March 2025 00:47:19 +0000 (0:00:00.193) 0:01:10.570 ******** 2025-03-27 00:47:19.998751 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:20.000038 | orchestrator | 2025-03-27 00:47:20.001317 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-27 00:47:20.002983 | orchestrator | Thursday 27 March 2025 00:47:19 +0000 (0:00:00.147) 0:01:10.717 ******** 2025-03-27 00:47:20.187698 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:20.188304 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:20.189479 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:20.190380 | orchestrator | 2025-03-27 00:47:20.191357 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-27 00:47:20.192120 | orchestrator | Thursday 27 March 2025 00:47:20 +0000 (0:00:00.187) 0:01:10.904 ******** 2025-03-27 00:47:20.346691 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:47:20.347949 | orchestrator | 2025-03-27 00:47:20.348788 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-27 00:47:20.349706 | orchestrator | Thursday 27 March 2025 00:47:20 +0000 (0:00:00.160) 0:01:11.065 ******** 2025-03-27 00:47:20.511433 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:20.513299 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:20.515288 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:20.515536 | orchestrator | 2025-03-27 00:47:20.516842 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-27 00:47:20.517643 | orchestrator | Thursday 27 March 2025 00:47:20 +0000 (0:00:00.164) 0:01:11.230 ******** 2025-03-27 00:47:20.708454 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:20.708670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:20.709809 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:20.710322 | orchestrator | 2025-03-27 00:47:20.711282 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-27 00:47:20.713708 | orchestrator | Thursday 27 March 2025 00:47:20 +0000 (0:00:00.198) 0:01:11.428 ******** 2025-03-27 00:47:20.911542 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:20.911905 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:20.913236 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:20.913947 | orchestrator | 2025-03-27 00:47:20.915323 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-27 00:47:20.916769 | orchestrator | Thursday 27 March 2025 00:47:20 +0000 (0:00:00.201) 0:01:11.630 ******** 2025-03-27 00:47:21.288259 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:21.288471 | orchestrator | 2025-03-27 00:47:21.288890 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-27 00:47:21.289567 | orchestrator | Thursday 27 March 2025 00:47:21 +0000 (0:00:00.378) 0:01:12.009 ******** 2025-03-27 00:47:21.425285 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:21.425815 | orchestrator | 2025-03-27 00:47:21.426468 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-27 00:47:21.427424 | orchestrator | Thursday 27 March 2025 00:47:21 +0000 (0:00:00.136) 0:01:12.145 ******** 2025-03-27 00:47:21.568837 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:21.570753 | orchestrator | 2025-03-27 00:47:21.572974 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-27 00:47:21.717639 | orchestrator | Thursday 27 March 2025 00:47:21 +0000 (0:00:00.142) 0:01:12.288 ******** 2025-03-27 00:47:21.717684 | orchestrator | ok: [testbed-node-5] => { 2025-03-27 00:47:21.718473 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-27 00:47:21.718578 | orchestrator | } 2025-03-27 00:47:21.719342 | orchestrator | 2025-03-27 00:47:21.719836 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-27 00:47:21.720477 | orchestrator | Thursday 27 March 2025 00:47:21 +0000 (0:00:00.149) 0:01:12.438 ******** 2025-03-27 00:47:21.870386 | orchestrator | ok: [testbed-node-5] => { 2025-03-27 00:47:21.872339 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-27 00:47:21.872908 | orchestrator | } 2025-03-27 00:47:21.875140 | orchestrator | 2025-03-27 00:47:21.875253 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-27 00:47:21.876864 | orchestrator | Thursday 27 March 2025 00:47:21 +0000 (0:00:00.152) 0:01:12.590 ******** 2025-03-27 00:47:22.039032 | orchestrator | ok: [testbed-node-5] => { 2025-03-27 00:47:22.040646 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-27 00:47:22.041120 | orchestrator | } 2025-03-27 00:47:22.043984 | orchestrator | 2025-03-27 00:47:22.045366 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-27 00:47:22.045398 | orchestrator | Thursday 27 March 2025 00:47:22 +0000 (0:00:00.168) 0:01:12.759 ******** 2025-03-27 00:47:22.612060 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:47:22.613047 | orchestrator | 2025-03-27 00:47:22.615801 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-27 00:47:22.615948 | orchestrator | Thursday 27 March 2025 00:47:22 +0000 (0:00:00.571) 0:01:13.330 ******** 2025-03-27 00:47:23.161312 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:47:23.162345 | orchestrator | 2025-03-27 00:47:23.162995 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-27 00:47:23.164325 | orchestrator | Thursday 27 March 2025 00:47:23 +0000 (0:00:00.550) 0:01:13.880 ******** 2025-03-27 00:47:23.712720 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:47:23.713592 | orchestrator | 2025-03-27 00:47:23.714147 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-27 00:47:23.715530 | orchestrator | Thursday 27 March 2025 00:47:23 +0000 (0:00:00.549) 0:01:14.430 ******** 2025-03-27 00:47:23.877124 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:47:23.877471 | orchestrator | 2025-03-27 00:47:23.878126 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-27 00:47:23.878776 | orchestrator | Thursday 27 March 2025 00:47:23 +0000 (0:00:00.167) 0:01:14.597 ******** 2025-03-27 00:47:24.011161 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:24.011521 | orchestrator | 2025-03-27 00:47:24.012038 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-27 00:47:24.014888 | orchestrator | Thursday 27 March 2025 00:47:24 +0000 (0:00:00.131) 0:01:14.728 ******** 2025-03-27 00:47:24.342990 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:24.343550 | orchestrator | 2025-03-27 00:47:24.344744 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-27 00:47:24.345522 | orchestrator | Thursday 27 March 2025 00:47:24 +0000 (0:00:00.334) 0:01:15.063 ******** 2025-03-27 00:47:24.513493 | orchestrator | ok: [testbed-node-5] => { 2025-03-27 00:47:24.513917 | orchestrator |  "vgs_report": { 2025-03-27 00:47:24.515133 | orchestrator |  "vg": [] 2025-03-27 00:47:24.517702 | orchestrator |  } 2025-03-27 00:47:24.517969 | orchestrator 
| } 2025-03-27 00:47:24.517997 | orchestrator | 2025-03-27 00:47:24.518061 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-27 00:47:24.519473 | orchestrator | Thursday 27 March 2025 00:47:24 +0000 (0:00:00.169) 0:01:15.232 ******** 2025-03-27 00:47:24.662793 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:24.664482 | orchestrator | 2025-03-27 00:47:24.664514 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-27 00:47:24.665373 | orchestrator | Thursday 27 March 2025 00:47:24 +0000 (0:00:00.149) 0:01:15.382 ******** 2025-03-27 00:47:24.819361 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:24.820219 | orchestrator | 2025-03-27 00:47:24.821801 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-27 00:47:24.822292 | orchestrator | Thursday 27 March 2025 00:47:24 +0000 (0:00:00.155) 0:01:15.538 ******** 2025-03-27 00:47:24.977898 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:24.979878 | orchestrator | 2025-03-27 00:47:24.982317 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-27 00:47:24.983547 | orchestrator | Thursday 27 March 2025 00:47:24 +0000 (0:00:00.158) 0:01:15.696 ******** 2025-03-27 00:47:25.154749 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:25.154963 | orchestrator | 2025-03-27 00:47:25.155810 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-27 00:47:25.157282 | orchestrator | Thursday 27 March 2025 00:47:25 +0000 (0:00:00.177) 0:01:15.874 ******** 2025-03-27 00:47:25.304923 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:25.306905 | orchestrator | 2025-03-27 00:47:25.307780 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-27 00:47:25.308929 | orchestrator | Thursday 27 March 2025 00:47:25 +0000 (0:00:00.150) 0:01:16.024 ******** 2025-03-27 00:47:25.455166 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:25.457221 | orchestrator | 2025-03-27 00:47:25.457665 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-27 00:47:25.458685 | orchestrator | Thursday 27 March 2025 00:47:25 +0000 (0:00:00.146) 0:01:16.171 ******** 2025-03-27 00:47:25.616597 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:25.617248 | orchestrator | 2025-03-27 00:47:25.617928 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-27 00:47:25.619156 | orchestrator | Thursday 27 March 2025 00:47:25 +0000 (0:00:00.164) 0:01:16.336 ******** 2025-03-27 00:47:25.764528 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:25.765337 | orchestrator | 2025-03-27 00:47:25.765869 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-27 00:47:25.766652 | orchestrator | Thursday 27 March 2025 00:47:25 +0000 (0:00:00.147) 0:01:16.484 ******** 2025-03-27 00:47:25.919955 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:25.920107 | orchestrator | 2025-03-27 00:47:25.921881 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-27 00:47:25.923163 | orchestrator | Thursday 27 March 2025 00:47:25 +0000 (0:00:00.154) 0:01:16.639 ******** 2025-03-27 00:47:26.067115 | orchestrator | 
skipping: [testbed-node-5] 2025-03-27 00:47:26.067981 | orchestrator | 2025-03-27 00:47:26.069456 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-27 00:47:26.071520 | orchestrator | Thursday 27 March 2025 00:47:26 +0000 (0:00:00.147) 0:01:16.786 ******** 2025-03-27 00:47:26.467625 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:26.468661 | orchestrator | 2025-03-27 00:47:26.468718 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-27 00:47:26.469417 | orchestrator | Thursday 27 March 2025 00:47:26 +0000 (0:00:00.398) 0:01:17.185 ******** 2025-03-27 00:47:26.615312 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:26.620556 | orchestrator | 2025-03-27 00:47:26.620937 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-27 00:47:26.620974 | orchestrator | Thursday 27 March 2025 00:47:26 +0000 (0:00:00.149) 0:01:17.334 ******** 2025-03-27 00:47:26.770279 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:26.771045 | orchestrator | 2025-03-27 00:47:26.772690 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-27 00:47:26.773144 | orchestrator | Thursday 27 March 2025 00:47:26 +0000 (0:00:00.155) 0:01:17.489 ******** 2025-03-27 00:47:26.949641 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:26.949880 | orchestrator | 2025-03-27 00:47:26.950570 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-27 00:47:26.950636 | orchestrator | Thursday 27 March 2025 00:47:26 +0000 (0:00:00.178) 0:01:17.668 ******** 2025-03-27 00:47:27.136067 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:27.136583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:27.137272 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:27.138157 | orchestrator | 2025-03-27 00:47:27.138686 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-27 00:47:27.139478 | orchestrator | Thursday 27 March 2025 00:47:27 +0000 (0:00:00.187) 0:01:17.855 ******** 2025-03-27 00:47:27.320591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:27.321421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:27.321583 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:27.321612 | orchestrator | 2025-03-27 00:47:27.322290 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-27 00:47:27.326296 | orchestrator | Thursday 27 March 2025 00:47:27 +0000 (0:00:00.183) 0:01:18.039 ******** 2025-03-27 00:47:27.500467 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:27.501113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:27.503906 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:27.682935 | orchestrator | 2025-03-27 00:47:27.683009 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-27 00:47:27.683026 | orchestrator | Thursday 27 March 2025 00:47:27 +0000 (0:00:00.178) 0:01:18.218 ******** 2025-03-27 00:47:27.683051 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:27.683340 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:27.684155 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:27.684718 | orchestrator | 2025-03-27 00:47:27.686123 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-27 00:47:27.686226 | orchestrator | Thursday 27 March 2025 00:47:27 +0000 (0:00:00.184) 0:01:18.402 ******** 2025-03-27 00:47:27.884867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:27.885894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:27.886766 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:27.887690 | orchestrator | 2025-03-27 00:47:27.888282 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-27 00:47:27.888885 | orchestrator | Thursday 27 March 2025 00:47:27 +0000 (0:00:00.199) 0:01:18.602 ******** 2025-03-27 00:47:28.082204 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:28.082345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:28.083524 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:28.083881 | orchestrator | 2025-03-27 00:47:28.084373 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-27 00:47:28.084411 | orchestrator | Thursday 27 March 2025 00:47:28 +0000 (0:00:00.199) 0:01:18.801 ******** 2025-03-27 00:47:28.306772 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:28.307276 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:28.307314 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:28.308372 | orchestrator | 2025-03-27 00:47:28.308803 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-27 00:47:28.309619 | orchestrator | Thursday 27 March 2025 00:47:28 +0000 (0:00:00.224) 0:01:19.026 ******** 2025-03-27 00:47:28.718300 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:28.718803 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:28.722135 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:28.722312 | orchestrator | 2025-03-27 00:47:28.722341 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-27 00:47:28.722869 | orchestrator | Thursday 27 March 2025 00:47:28 +0000 (0:00:00.408) 0:01:19.435 ******** 2025-03-27 00:47:29.279860 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:47:29.280010 | orchestrator | 2025-03-27 00:47:29.281797 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-27 00:47:29.282701 | orchestrator | Thursday 27 March 2025 00:47:29 +0000 (0:00:00.560) 0:01:19.995 ******** 2025-03-27 00:47:29.826642 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:47:29.826774 | orchestrator | 2025-03-27 00:47:29.829381 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-27 00:47:29.975005 | orchestrator | Thursday 27 March 2025 00:47:29 +0000 (0:00:00.548) 0:01:20.543 ******** 2025-03-27 00:47:29.975107 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:47:29.975160 | orchestrator | 2025-03-27 00:47:29.975864 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-27 00:47:29.976354 | orchestrator | Thursday 27 March 2025 00:47:29 +0000 (0:00:00.151) 0:01:20.695 ******** 2025-03-27 00:47:30.177050 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'vg_name': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'}) 2025-03-27 00:47:30.177785 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'vg_name': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'}) 2025-03-27 00:47:30.178608 | orchestrator | 2025-03-27 00:47:30.180066 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-27 00:47:30.181233 | orchestrator | Thursday 27 March 2025 00:47:30 +0000 (0:00:00.201) 0:01:20.896 ******** 2025-03-27 00:47:30.378449 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:30.380760 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:30.381946 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:30.381980 | orchestrator | 2025-03-27 00:47:30.382001 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-27 00:47:30.382085 | orchestrator | Thursday 27 March 2025 00:47:30 +0000 (0:00:00.202) 0:01:21.099 ******** 2025-03-27 00:47:30.580596 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:30.580798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  
2025-03-27 00:47:30.581323 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:30.581583 | orchestrator | 2025-03-27 00:47:30.584221 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-27 00:47:30.772933 | orchestrator | Thursday 27 March 2025 00:47:30 +0000 (0:00:00.199) 0:01:21.299 ******** 2025-03-27 00:47:30.773007 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'})  2025-03-27 00:47:30.775407 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'})  2025-03-27 00:47:30.776651 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:30.777049 | orchestrator | 2025-03-27 00:47:30.778295 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-27 00:47:30.778962 | orchestrator | Thursday 27 March 2025 00:47:30 +0000 (0:00:00.192) 0:01:21.491 ******** 2025-03-27 00:47:31.425111 | orchestrator | ok: [testbed-node-5] => { 2025-03-27 00:47:31.425340 | orchestrator |  "lvm_report": { 2025-03-27 00:47:31.426405 | orchestrator |  "lv": [ 2025-03-27 00:47:31.427682 | orchestrator |  { 2025-03-27 00:47:31.428254 | orchestrator |  "lv_name": "osd-block-8acd0346-cc61-560a-be8a-825f05553edd", 2025-03-27 00:47:31.428878 | orchestrator |  "vg_name": "ceph-8acd0346-cc61-560a-be8a-825f05553edd" 2025-03-27 00:47:31.429745 | orchestrator |  }, 2025-03-27 00:47:31.430291 | orchestrator |  { 2025-03-27 00:47:31.430982 | orchestrator |  "lv_name": "osd-block-923c5540-3b69-54d6-b090-bccde0d698f1", 2025-03-27 00:47:31.431380 | orchestrator |  "vg_name": "ceph-923c5540-3b69-54d6-b090-bccde0d698f1" 2025-03-27 00:47:31.432306 | orchestrator |  } 2025-03-27 00:47:31.433516 | orchestrator |  ], 2025-03-27 00:47:31.433886 | orchestrator |  "pv": [ 2025-03-27 00:47:31.433917 | orchestrator |  { 2025-03-27 00:47:31.434326 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-27 00:47:31.434566 | orchestrator |  "vg_name": "ceph-923c5540-3b69-54d6-b090-bccde0d698f1" 2025-03-27 00:47:31.435779 | orchestrator |  }, 2025-03-27 00:47:31.437448 | orchestrator |  { 2025-03-27 00:47:31.437773 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-27 00:47:31.438657 | orchestrator |  "vg_name": "ceph-8acd0346-cc61-560a-be8a-825f05553edd" 2025-03-27 00:47:31.439432 | orchestrator |  } 2025-03-27 00:47:31.439910 | orchestrator |  ] 2025-03-27 00:47:31.440628 | orchestrator |  } 2025-03-27 00:47:31.441246 | orchestrator | } 2025-03-27 00:47:31.441440 | orchestrator | 2025-03-27 00:47:31.441769 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:47:31.442241 | orchestrator | 2025-03-27 00:47:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:47:31.443541 | orchestrator | 2025-03-27 00:47:31 | INFO  | Please wait and do not abort execution. 
2025-03-27 00:47:31.443572 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-27 00:47:31.443759 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-27 00:47:31.443788 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-27 00:47:31.444226 | orchestrator | 2025-03-27 00:47:31.445294 | orchestrator | 2025-03-27 00:47:31.445533 | orchestrator | 2025-03-27 00:47:31.445933 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 00:47:31.447354 | orchestrator | Thursday 27 March 2025 00:47:31 +0000 (0:00:00.653) 0:01:22.145 ******** 2025-03-27 00:47:31.447965 | orchestrator | =============================================================================== 2025-03-27 00:47:31.448442 | orchestrator | Create block VGs -------------------------------------------------------- 6.32s 2025-03-27 00:47:31.449160 | orchestrator | Create block LVs -------------------------------------------------------- 4.39s 2025-03-27 00:47:31.449937 | orchestrator | Print LVM report data --------------------------------------------------- 2.35s 2025-03-27 00:47:31.451871 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.11s 2025-03-27 00:47:31.452279 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.97s 2025-03-27 00:47:31.452305 | orchestrator | Add known links to the list of available block devices ------------------ 1.84s 2025-03-27 00:47:31.452324 | orchestrator | Add known partitions to the list of available block devices ------------- 1.72s 2025-03-27 00:47:31.452829 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.70s 2025-03-27 00:47:31.454088 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.68s 2025-03-27 00:47:31.456126 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.68s 2025-03-27 00:47:31.459010 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.00s 2025-03-27 00:47:31.459137 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2025-03-27 00:47:31.460865 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.85s 2025-03-27 00:47:31.461598 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2025-03-27 00:47:31.463355 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.77s 2025-03-27 00:47:31.463919 | orchestrator | Print 'Create DB LVs for ceph_db_wal_devices' --------------------------- 0.77s 2025-03-27 00:47:31.464914 | orchestrator | Get initial list of available block devices ----------------------------- 0.76s 2025-03-27 00:47:31.465560 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.76s 2025-03-27 00:47:31.466376 | orchestrator | Fail if DB LV size < 30 GiB for ceph_db_wal_devices --------------------- 0.74s 2025-03-27 00:47:31.467263 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.72s 2025-03-27 00:47:33.562509 | orchestrator | 2025-03-27 00:47:33 | INFO  | Task 6201a5a8-1322-4654-af74-0c5df5de92c7 (facts) was prepared for execution. 
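For reference, the lvm_report structure printed in the play above (lv_name/vg_name pairs from lvs and pv_name/vg_name pairs from pvs) can be reproduced outside the play with the JSON reporting mode of the lvm2 tools. This is a minimal sketch of the "Get list of Ceph LVs/PVs with associated VGs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" steps, not the playbook's actual implementation; it assumes the lvs/pvs binaries are installed on the node and that the script runs with sufficient privileges:

```python
import json
import subprocess

def lvm_json(cmd: str, fields: str) -> dict:
    """Run an lvm2 reporting command in JSON mode and return its report block."""
    out = subprocess.run(
        [cmd, "--reportformat", "json", "-o", fields],
        check=True, capture_output=True, text=True,
    ).stdout
    # lvs/pvs wrap their rows as {"report": [{"lv": [...]}]} or {"report": [{"pv": [...]}]}
    return json.loads(out)["report"][0]

# Combine both reports into a single structure shaped like the lvm_report above.
lvm_report = {
    "lv": lvm_json("lvs", "lv_name,vg_name")["lv"],
    "pv": lvm_json("pvs", "pv_name,vg_name")["pv"],
}
print(json.dumps(lvm_report, indent=2))
```

Only the lv_name/vg_name and pv_name/vg_name fields are selected here because those are the fields shown in the report above; the actual role may query additional fields.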
2025-03-27 00:47:36.937072 | orchestrator | 2025-03-27 00:47:33 | INFO  | It takes a moment until task 6201a5a8-1322-4654-af74-0c5df5de92c7 (facts) has been started and output is visible here. 2025-03-27 00:47:36.937172 | orchestrator | 2025-03-27 00:47:36.938716 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-03-27 00:47:36.939014 | orchestrator | 2025-03-27 00:47:36.941336 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-27 00:47:36.941498 | orchestrator | Thursday 27 March 2025 00:47:36 +0000 (0:00:00.223) 0:00:00.223 ******** 2025-03-27 00:47:38.028746 | orchestrator | ok: [testbed-manager] 2025-03-27 00:47:38.028925 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:47:38.029891 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:47:38.030408 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:47:38.031109 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:47:38.031844 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:47:38.032353 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:47:38.032458 | orchestrator | 2025-03-27 00:47:38.032903 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-27 00:47:38.036363 | orchestrator | Thursday 27 March 2025 00:47:38 +0000 (0:00:01.095) 0:00:01.319 ******** 2025-03-27 00:47:38.206890 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:47:38.299265 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:47:38.384300 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:47:38.474445 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:47:38.567452 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:47:39.349216 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:39.349794 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:39.353308 | orchestrator | 2025-03-27 00:47:39.354490 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-27 00:47:39.354939 | orchestrator | 2025-03-27 00:47:39.355825 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-27 00:47:39.356949 | orchestrator | Thursday 27 March 2025 00:47:39 +0000 (0:00:01.322) 0:00:02.641 ******** 2025-03-27 00:47:44.197560 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:47:44.198704 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:47:44.200451 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:47:44.201336 | orchestrator | ok: [testbed-manager] 2025-03-27 00:47:44.203681 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:47:44.204545 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:47:44.205210 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:47:44.206501 | orchestrator | 2025-03-27 00:47:44.208148 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-27 00:47:44.209030 | orchestrator | 2025-03-27 00:47:44.209061 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-27 00:47:44.209809 | orchestrator | Thursday 27 March 2025 00:47:44 +0000 (0:00:04.850) 0:00:07.492 ******** 2025-03-27 00:47:44.555274 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:47:44.645534 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:47:44.726897 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:47:44.824171 | orchestrator | skipping: [testbed-node-2] 2025-03-27 
00:47:44.907516 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:47:44.945517 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:47:44.946740 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:47:44.947724 | orchestrator | 2025-03-27 00:47:44.948652 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:47:44.949504 | orchestrator | 2025-03-27 00:47:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-27 00:47:44.951222 | orchestrator | 2025-03-27 00:47:44 | INFO  | Please wait and do not abort execution. 2025-03-27 00:47:44.951258 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:47:44.952452 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:47:44.952792 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:47:44.954376 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:47:44.955337 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:47:44.956741 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:47:44.958234 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:47:44.959518 | orchestrator | 2025-03-27 00:47:44.959953 | orchestrator | Thursday 27 March 2025 00:47:44 +0000 (0:00:00.748) 0:00:08.240 ******** 2025-03-27 00:47:44.961389 | orchestrator | =============================================================================== 2025-03-27 00:47:44.962560 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s 2025-03-27 00:47:44.963270 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s 2025-03-27 00:47:44.964239 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s 2025-03-27 00:47:44.965287 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.75s 2025-03-27 00:47:45.626899 | orchestrator | 2025-03-27 00:47:45.629949 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Mar 27 00:47:45 UTC 2025 2025-03-27 00:47:45.630067 | orchestrator | 2025-03-27 00:47:47.138238 | orchestrator | 2025-03-27 00:47:47 | INFO  | Collection nutshell is prepared for execution 2025-03-27 00:47:47.142357 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [0] - dotfiles 2025-03-27 00:47:47.142409 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [0] - homer 2025-03-27 00:47:47.143748 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [0] - netdata 2025-03-27 00:47:47.143776 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [0] - openstackclient 2025-03-27 00:47:47.143792 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [0] - phpmyadmin 2025-03-27 00:47:47.143806 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [0] - common 2025-03-27 00:47:47.143827 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [1] -- loadbalancer 2025-03-27 00:47:47.144004 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [2] --- opensearch 2025-03-27 00:47:47.144030 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [2] --- mariadb-ng 2025-03-27 00:47:47.144050 | orchestrator | 2025-03-27 
00:47:47 | INFO  | D [3] ---- horizon 2025-03-27 00:47:47.144269 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [3] ---- keystone 2025-03-27 00:47:47.144295 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [4] ----- neutron 2025-03-27 00:47:47.144309 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [5] ------ wait-for-nova 2025-03-27 00:47:47.144324 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [5] ------ octavia 2025-03-27 00:47:47.144345 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [4] ----- barbican 2025-03-27 00:47:47.144588 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [4] ----- designate 2025-03-27 00:47:47.144616 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [4] ----- ironic 2025-03-27 00:47:47.144631 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [4] ----- placement 2025-03-27 00:47:47.144678 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [4] ----- magnum 2025-03-27 00:47:47.144744 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [1] -- openvswitch 2025-03-27 00:47:47.144838 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [2] --- ovn 2025-03-27 00:47:47.144859 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [1] -- memcached 2025-03-27 00:47:47.144942 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [1] -- redis 2025-03-27 00:47:47.144962 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [1] -- rabbitmq-ng 2025-03-27 00:47:47.144982 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [0] - kubernetes 2025-03-27 00:47:47.145138 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [1] -- kubeconfig 2025-03-27 00:47:47.145585 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [1] -- copy-kubeconfig 2025-03-27 00:47:47.145614 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [0] - ceph 2025-03-27 00:47:47.147006 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [1] -- ceph-pools 2025-03-27 00:47:47.335633 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [2] --- copy-ceph-keys 2025-03-27 00:47:47.335666 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [3] ---- cephclient 2025-03-27 00:47:47.335681 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-03-27 00:47:47.335695 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [4] ----- wait-for-keystone 2025-03-27 00:47:47.335708 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [5] ------ kolla-ceph-rgw 2025-03-27 00:47:47.335748 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [5] ------ glance 2025-03-27 00:47:47.335763 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [5] ------ cinder 2025-03-27 00:47:47.335776 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [5] ------ nova 2025-03-27 00:47:47.335789 | orchestrator | 2025-03-27 00:47:47 | INFO  | A [4] ----- prometheus 2025-03-27 00:47:47.335803 | orchestrator | 2025-03-27 00:47:47 | INFO  | D [5] ------ grafana 2025-03-27 00:47:47.335823 | orchestrator | 2025-03-27 00:47:47 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-03-27 00:47:49.346310 | orchestrator | 2025-03-27 00:47:47 | INFO  | Tasks are running in the background 2025-03-27 00:47:49.346438 | orchestrator | 2025-03-27 00:47:49 | INFO  | No task IDs specified, wait for all currently running tasks 2025-03-27 00:47:51.447556 | orchestrator | 2025-03-27 00:47:51 | INFO  | Task ec7e5ad3-4c9b-4195-a897-fbd8f066147b is in state STARTED 2025-03-27 00:47:51.449599 | orchestrator | 2025-03-27 00:47:51 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:47:51.449633 | orchestrator | 2025-03-27 00:47:51 | INFO  | Task 
58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:47:51.451698 | orchestrator | 2025-03-27 00:47:51 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:47:51.452171 | orchestrator | 2025-03-27 00:47:51 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:47:51.452909 | orchestrator | 2025-03-27 00:47:51 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:47:54.513146 | orchestrator | 2025-03-27 00:47:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:47:54.513340 | orchestrator | 2025-03-27 00:47:54 | INFO  | Task ec7e5ad3-4c9b-4195-a897-fbd8f066147b is in state STARTED 2025-03-27 00:47:54.513439 | orchestrator | 2025-03-27 00:47:54 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:47:54.513793 | orchestrator | 2025-03-27 00:47:54 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:47:54.514352 | orchestrator | 2025-03-27 00:47:54 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:47:54.515125 | orchestrator | 2025-03-27 00:47:54 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:47:54.515508 | orchestrator | 2025-03-27 00:47:54 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:47:57.577069 | orchestrator | 2025-03-27 00:47:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:47:57.577223 | orchestrator | 2025-03-27 00:47:57 | INFO  | Task ec7e5ad3-4c9b-4195-a897-fbd8f066147b is in state STARTED 2025-03-27 00:47:57.579670 | orchestrator | 2025-03-27 00:47:57 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:47:57.582608 | orchestrator | 2025-03-27 00:47:57 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:47:57.585546 | orchestrator | 2025-03-27 00:47:57 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:00.674438 | orchestrator | 2025-03-27 00:47:57 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:00.674575 | orchestrator | 2025-03-27 00:47:57 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:00.674631 | orchestrator | 2025-03-27 00:47:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:00.674665 | orchestrator | 2025-03-27 00:48:00 | INFO  | Task ec7e5ad3-4c9b-4195-a897-fbd8f066147b is in state STARTED 2025-03-27 00:48:00.679356 | orchestrator | 2025-03-27 00:48:00 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:00.680893 | orchestrator | 2025-03-27 00:48:00 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:00.682258 | orchestrator | 2025-03-27 00:48:00 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:00.685559 | orchestrator | 2025-03-27 00:48:00 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:00.686636 | orchestrator | 2025-03-27 00:48:00 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:03.739790 | orchestrator | 2025-03-27 00:48:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:03.739920 | orchestrator | 2025-03-27 00:48:03 | INFO  | Task ec7e5ad3-4c9b-4195-a897-fbd8f066147b is in state STARTED 2025-03-27 00:48:03.752974 | orchestrator | 2025-03-27 
00:48:03 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:03.758336 | orchestrator | 2025-03-27 00:48:03 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:03.758402 | orchestrator | 2025-03-27 00:48:03 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:03.765563 | orchestrator | 2025-03-27 00:48:03 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:03.769152 | orchestrator | 2025-03-27 00:48:03 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:06.858305 | orchestrator | 2025-03-27 00:48:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:06.858399 | orchestrator | 2025-03-27 00:48:06 | INFO  | Task ec7e5ad3-4c9b-4195-a897-fbd8f066147b is in state STARTED 2025-03-27 00:48:06.858934 | orchestrator | 2025-03-27 00:48:06 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:06.863402 | orchestrator | 2025-03-27 00:48:06 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:06.865222 | orchestrator | 2025-03-27 00:48:06 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:06.865268 | orchestrator | 2025-03-27 00:48:06 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:06.865276 | orchestrator | 2025-03-27 00:48:06 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:06.865283 | orchestrator | 2025-03-27 00:48:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:09.950438 | orchestrator | 2025-03-27 00:48:09 | INFO  | Task ec7e5ad3-4c9b-4195-a897-fbd8f066147b is in state STARTED 2025-03-27 00:48:09.953105 | orchestrator | 2025-03-27 00:48:09 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:09.954956 | orchestrator | 2025-03-27 00:48:09 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:09.955682 | orchestrator | 2025-03-27 00:48:09 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:09.957291 | orchestrator | 2025-03-27 00:48:09 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:09.958773 | orchestrator | 2025-03-27 00:48:09 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:13.019648 | orchestrator | 2025-03-27 00:48:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:13.019790 | orchestrator | 2025-03-27 00:48:13.019811 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-03-27 00:48:13.019847 | orchestrator | 2025-03-27 00:48:13.019863 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-03-27 00:48:13.019877 | orchestrator | Thursday 27 March 2025 00:47:55 +0000 (0:00:00.601) 0:00:00.601 ******** 2025-03-27 00:48:13.019891 | orchestrator | changed: [testbed-manager] 2025-03-27 00:48:13.019906 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:48:13.019920 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:48:13.019934 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:48:13.019947 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:48:13.019961 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:48:13.019975 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:48:13.019988 | orchestrator | 2025-03-27 00:48:13.020002 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-03-27 00:48:13.020023 | orchestrator | Thursday 27 March 2025 00:47:59 +0000 (0:00:04.101) 0:00:04.703 ******** 2025-03-27 00:48:13.020038 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-03-27 00:48:13.020052 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-03-27 00:48:13.020072 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-03-27 00:48:13.020086 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-03-27 00:48:13.020100 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-03-27 00:48:13.020114 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-03-27 00:48:13.020127 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-03-27 00:48:13.020141 | orchestrator | 2025-03-27 00:48:13.020155 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-03-27 00:48:13.020168 | orchestrator | Thursday 27 March 2025 00:48:02 +0000 (0:00:02.632) 0:00:07.336 ******** 2025-03-27 00:48:13.020213 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-27 00:48:00.964814', 'end': '2025-03-27 00:48:01.972994', 'delta': '0:00:01.008180', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-27 00:48:13.020239 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-27 00:48:01.033144', 'end': '2025-03-27 00:48:01.041944', 'delta': '0:00:00.008800', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-27 00:48:13.020257 | 
orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-27 00:48:00.993545', 'end': '2025-03-27 00:48:01.001839', 'delta': '0:00:00.008294', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-27 00:48:13.020304 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-27 00:48:01.182807', 'end': '2025-03-27 00:48:01.191472', 'delta': '0:00:00.008665', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-27 00:48:13.020322 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-27 00:48:01.500205', 'end': '2025-03-27 00:48:01.511293', 'delta': '0:00:00.011088', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-27 00:48:13.020339 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-27 00:48:01.754738', 'end': '2025-03-27 00:48:01.763644', 'delta': '0:00:00.008906', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-27 00:48:13.020360 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-27 00:48:02.036645', 'end': '2025-03-27 00:48:02.046422', 'delta': '0:00:00.009777', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-27 00:48:13.020377 | orchestrator | 2025-03-27 00:48:13.020393 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-03-27 00:48:13.020409 | orchestrator | Thursday 27 March 2025 00:48:05 +0000 (0:00:03.465) 0:00:10.802 ******** 2025-03-27 00:48:13.020424 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-03-27 00:48:13.020448 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-03-27 00:48:13.020464 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-03-27 00:48:13.020480 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-03-27 00:48:13.020496 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-03-27 00:48:13.020512 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-03-27 00:48:13.020528 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-03-27 00:48:13.020543 | orchestrator | 2025-03-27 00:48:13.020557 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:48:13.020572 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:48:13.020588 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:48:13.020602 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:48:13.020622 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:48:13.024041 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:48:13.024077 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:48:13.024091 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:48:13.024105 | orchestrator | 2025-03-27 00:48:13.024119 | orchestrator | Thursday 27 March 2025 00:48:09 +0000 (0:00:03.621) 0:00:14.423 ******** 2025-03-27 00:48:13.024133 | orchestrator | =============================================================================== 2025-03-27 00:48:13.024147 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.10s 2025-03-27 00:48:13.024161 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.62s 2025-03-27 00:48:13.024175 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.47s 2025-03-27 00:48:13.024212 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 2.63s 2025-03-27 00:48:13.024233 | orchestrator | 2025-03-27 00:48:13 | INFO  | Task ec7e5ad3-4c9b-4195-a897-fbd8f066147b is in state SUCCESS 2025-03-27 00:48:16.081636 | orchestrator | 2025-03-27 00:48:13 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:16.081756 | orchestrator | 2025-03-27 00:48:13 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:16.081776 | orchestrator | 2025-03-27 00:48:13 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:16.081791 | orchestrator | 2025-03-27 00:48:13 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:16.081805 | orchestrator | 2025-03-27 00:48:13 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:16.081819 | orchestrator | 2025-03-27 00:48:13 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:16.081834 | orchestrator | 2025-03-27 00:48:13 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:16.081867 | orchestrator | 2025-03-27 00:48:16 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:16.086424 | orchestrator | 2025-03-27 00:48:16 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:16.086495 | orchestrator | 2025-03-27 00:48:16 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:16.086904 | orchestrator | 2025-03-27 00:48:16 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:16.090347 | orchestrator | 2025-03-27 00:48:16 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:16.098013 | orchestrator | 2025-03-27 00:48:16 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:19.238562 | orchestrator | 2025-03-27 00:48:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:19.238694 | orchestrator | 2025-03-27 00:48:19 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:19.242549 | orchestrator | 2025-03-27 00:48:19 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:19.242588 | orchestrator | 2025-03-27 00:48:19 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:19.246171 | orchestrator | 2025-03-27 00:48:19 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:19.251340 | orchestrator | 2025-03-27 00:48:19 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:19.259159 | orchestrator | 2025-03-27 00:48:19 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:22.347771 | orchestrator | 2025-03-27 00:48:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:22.347894 | orchestrator | 2025-03-27 00:48:22 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:22.348969 | orchestrator | 2025-03-27 00:48:22 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:22.356516 | orchestrator | 2025-03-27 00:48:22 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:22.367566 | orchestrator | 2025-03-27 00:48:22 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:22.369651 | orchestrator | 2025-03-27 00:48:22 | INFO  | 
Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:22.370816 | orchestrator | 2025-03-27 00:48:22 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:25.426403 | orchestrator | 2025-03-27 00:48:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:25.426511 | orchestrator | 2025-03-27 00:48:25 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:25.428904 | orchestrator | 2025-03-27 00:48:25 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:25.444476 | orchestrator | 2025-03-27 00:48:25 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:28.508346 | orchestrator | 2025-03-27 00:48:25 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:28.508449 | orchestrator | 2025-03-27 00:48:25 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:28.508466 | orchestrator | 2025-03-27 00:48:25 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:28.508481 | orchestrator | 2025-03-27 00:48:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:28.508512 | orchestrator | 2025-03-27 00:48:28 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:31.568058 | orchestrator | 2025-03-27 00:48:28 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:31.568252 | orchestrator | 2025-03-27 00:48:28 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:31.568274 | orchestrator | 2025-03-27 00:48:28 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:31.568289 | orchestrator | 2025-03-27 00:48:28 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:31.568303 | orchestrator | 2025-03-27 00:48:28 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:31.568318 | orchestrator | 2025-03-27 00:48:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:31.568348 | orchestrator | 2025-03-27 00:48:31 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:31.568712 | orchestrator | 2025-03-27 00:48:31 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:31.572999 | orchestrator | 2025-03-27 00:48:31 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:31.582461 | orchestrator | 2025-03-27 00:48:31 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:34.647698 | orchestrator | 2025-03-27 00:48:31 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:34.647812 | orchestrator | 2025-03-27 00:48:31 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:34.647830 | orchestrator | 2025-03-27 00:48:31 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:34.647860 | orchestrator | 2025-03-27 00:48:34 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:34.653236 | orchestrator | 2025-03-27 00:48:34 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:34.656177 | orchestrator | 2025-03-27 00:48:34 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:34.656329 | orchestrator | 
2025-03-27 00:48:34 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:34.656356 | orchestrator | 2025-03-27 00:48:34 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state STARTED 2025-03-27 00:48:34.660329 | orchestrator | 2025-03-27 00:48:34 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:37.745001 | orchestrator | 2025-03-27 00:48:34 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:37.745137 | orchestrator | 2025-03-27 00:48:37 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:37.746126 | orchestrator | 2025-03-27 00:48:37 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:37.754557 | orchestrator | 2025-03-27 00:48:37 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:40.817024 | orchestrator | 2025-03-27 00:48:37 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:40.817105 | orchestrator | 2025-03-27 00:48:37 | INFO  | Task 0f911509-aa60-40d9-b251-0ece1812d38b is in state SUCCESS 2025-03-27 00:48:40.817114 | orchestrator | 2025-03-27 00:48:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:48:40.817122 | orchestrator | 2025-03-27 00:48:37 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:40.817130 | orchestrator | 2025-03-27 00:48:37 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:40.817148 | orchestrator | 2025-03-27 00:48:40 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:40.820060 | orchestrator | 2025-03-27 00:48:40 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:40.827107 | orchestrator | 2025-03-27 00:48:40 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:40.831409 | orchestrator | 2025-03-27 00:48:40 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:40.831448 | orchestrator | 2025-03-27 00:48:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:48:40.834979 | orchestrator | 2025-03-27 00:48:40 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:43.893712 | orchestrator | 2025-03-27 00:48:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:43.893851 | orchestrator | 2025-03-27 00:48:43 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:43.896614 | orchestrator | 2025-03-27 00:48:43 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:43.902948 | orchestrator | 2025-03-27 00:48:43 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:43.904546 | orchestrator | 2025-03-27 00:48:43 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:43.905783 | orchestrator | 2025-03-27 00:48:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:48:43.906610 | orchestrator | 2025-03-27 00:48:43 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:46.972577 | orchestrator | 2025-03-27 00:48:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:46.972706 | orchestrator | 2025-03-27 00:48:46 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 
00:48:46.974526 | orchestrator | 2025-03-27 00:48:46 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:46.974562 | orchestrator | 2025-03-27 00:48:46 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:46.976915 | orchestrator | 2025-03-27 00:48:46 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:50.062374 | orchestrator | 2025-03-27 00:48:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:48:50.062490 | orchestrator | 2025-03-27 00:48:46 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:50.062508 | orchestrator | 2025-03-27 00:48:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:50.062541 | orchestrator | 2025-03-27 00:48:50 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:50.088101 | orchestrator | 2025-03-27 00:48:50 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:50.088152 | orchestrator | 2025-03-27 00:48:50 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:50.088174 | orchestrator | 2025-03-27 00:48:50 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:53.135258 | orchestrator | 2025-03-27 00:48:50 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:48:53.135373 | orchestrator | 2025-03-27 00:48:50 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:53.135391 | orchestrator | 2025-03-27 00:48:50 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:53.135424 | orchestrator | 2025-03-27 00:48:53 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:56.211279 | orchestrator | 2025-03-27 00:48:53 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:56.211433 | orchestrator | 2025-03-27 00:48:53 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:56.211453 | orchestrator | 2025-03-27 00:48:53 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:56.211468 | orchestrator | 2025-03-27 00:48:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:48:56.211482 | orchestrator | 2025-03-27 00:48:53 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:56.211496 | orchestrator | 2025-03-27 00:48:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:48:56.211527 | orchestrator | 2025-03-27 00:48:56 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:56.221666 | orchestrator | 2025-03-27 00:48:56 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:56.234721 | orchestrator | 2025-03-27 00:48:56 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:56.255789 | orchestrator | 2025-03-27 00:48:56 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:56.259120 | orchestrator | 2025-03-27 00:48:56 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:48:56.268064 | orchestrator | 2025-03-27 00:48:56 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:48:59.347674 | orchestrator | 2025-03-27 00:48:56 | INFO  | Wait 1 second(s) until the 
next check 2025-03-27 00:48:59.347813 | orchestrator | 2025-03-27 00:48:59 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:48:59.353282 | orchestrator | 2025-03-27 00:48:59 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:48:59.353982 | orchestrator | 2025-03-27 00:48:59 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:48:59.355326 | orchestrator | 2025-03-27 00:48:59 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:48:59.356855 | orchestrator | 2025-03-27 00:48:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:48:59.360630 | orchestrator | 2025-03-27 00:48:59 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:49:02.426183 | orchestrator | 2025-03-27 00:48:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:02.426341 | orchestrator | 2025-03-27 00:49:02 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state STARTED 2025-03-27 00:49:02.427023 | orchestrator | 2025-03-27 00:49:02 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:49:02.429060 | orchestrator | 2025-03-27 00:49:02 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:02.430393 | orchestrator | 2025-03-27 00:49:02 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:02.432390 | orchestrator | 2025-03-27 00:49:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:02.433554 | orchestrator | 2025-03-27 00:49:02 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:49:02.433630 | orchestrator | 2025-03-27 00:49:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:05.501336 | orchestrator | 2025-03-27 00:49:05 | INFO  | Task c4da73b7-9bcc-4774-baa2-d4a04046ed75 is in state SUCCESS 2025-03-27 00:49:05.504931 | orchestrator | 2025-03-27 00:49:05 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:49:05.505015 | orchestrator | 2025-03-27 00:49:05 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:05.505768 | orchestrator | 2025-03-27 00:49:05 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:05.528027 | orchestrator | 2025-03-27 00:49:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:05.541816 | orchestrator | 2025-03-27 00:49:05 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:49:08.578772 | orchestrator | 2025-03-27 00:49:05 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:08.578904 | orchestrator | 2025-03-27 00:49:08 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:49:08.582953 | orchestrator | 2025-03-27 00:49:08 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:08.583338 | orchestrator | 2025-03-27 00:49:08 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:08.583371 | orchestrator | 2025-03-27 00:49:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:08.584409 | orchestrator | 2025-03-27 00:49:08 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:49:08.584705 | orchestrator | 2025-03-27 00:49:08 | INFO  | Wait 
1 second(s) until the next check 2025-03-27 00:49:11.650382 | orchestrator | 2025-03-27 00:49:11 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:49:11.650567 | orchestrator | 2025-03-27 00:49:11 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:11.650589 | orchestrator | 2025-03-27 00:49:11 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:11.650599 | orchestrator | 2025-03-27 00:49:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:11.650613 | orchestrator | 2025-03-27 00:49:11 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:49:14.714487 | orchestrator | 2025-03-27 00:49:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:14.714619 | orchestrator | 2025-03-27 00:49:14 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:49:17.777859 | orchestrator | 2025-03-27 00:49:14 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:17.777980 | orchestrator | 2025-03-27 00:49:14 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:17.777998 | orchestrator | 2025-03-27 00:49:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:17.778012 | orchestrator | 2025-03-27 00:49:14 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state STARTED 2025-03-27 00:49:17.778070 | orchestrator | 2025-03-27 00:49:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:17.778102 | orchestrator | 2025-03-27 00:49:17 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:49:17.779872 | orchestrator | 2025-03-27 00:49:17 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:17.785894 | orchestrator | 2025-03-27 00:49:17 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:17.788350 | orchestrator | 2025-03-27 00:49:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:17.792382 | orchestrator | 2025-03-27 00:49:17 | INFO  | Task 057fb973-fa4f-4f8f-943b-a88eb814f179 is in state SUCCESS 2025-03-27 00:49:17.793574 | orchestrator | 2025-03-27 00:49:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:17.793624 | orchestrator | 2025-03-27 00:49:17.793639 | orchestrator | 2025-03-27 00:49:17.793651 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-03-27 00:49:17.793664 | orchestrator | 2025-03-27 00:49:17.793677 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-03-27 00:49:17.793689 | orchestrator | Thursday 27 March 2025 00:47:57 +0000 (0:00:00.515) 0:00:00.515 ******** 2025-03-27 00:49:17.793702 | orchestrator | ok: [testbed-manager] => { 2025-03-27 00:49:17.793716 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-03-27 00:49:17.793730 | orchestrator | } 2025-03-27 00:49:17.793743 | orchestrator | 2025-03-27 00:49:17.793755 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-03-27 00:49:17.793767 | orchestrator | Thursday 27 March 2025 00:47:57 +0000 (0:00:00.243) 0:00:00.759 ******** 2025-03-27 00:49:17.793779 | orchestrator | ok: [testbed-manager] 2025-03-27 00:49:17.793792 | orchestrator | 2025-03-27 00:49:17.793804 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-03-27 00:49:17.793816 | orchestrator | Thursday 27 March 2025 00:47:59 +0000 (0:00:01.617) 0:00:02.377 ******** 2025-03-27 00:49:17.793828 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-03-27 00:49:17.793840 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-03-27 00:49:17.793852 | orchestrator | 2025-03-27 00:49:17.793864 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-03-27 00:49:17.793877 | orchestrator | Thursday 27 March 2025 00:48:01 +0000 (0:00:01.721) 0:00:04.098 ******** 2025-03-27 00:49:17.793888 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.793901 | orchestrator | 2025-03-27 00:49:17.793913 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-03-27 00:49:17.793925 | orchestrator | Thursday 27 March 2025 00:48:05 +0000 (0:00:04.351) 0:00:08.450 ******** 2025-03-27 00:49:17.793937 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.793949 | orchestrator | 2025-03-27 00:49:17.793961 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-03-27 00:49:17.793973 | orchestrator | Thursday 27 March 2025 00:48:07 +0000 (0:00:02.461) 0:00:10.912 ******** 2025-03-27 00:49:17.793985 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-03-27 00:49:17.793997 | orchestrator | ok: [testbed-manager] 2025-03-27 00:49:17.794010 | orchestrator | 2025-03-27 00:49:17.794082 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-03-27 00:49:17.794095 | orchestrator | Thursday 27 March 2025 00:48:33 +0000 (0:00:25.735) 0:00:36.647 ******** 2025-03-27 00:49:17.794107 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.794120 | orchestrator | 2025-03-27 00:49:17.794132 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:49:17.794145 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:49:17.794159 | orchestrator | 2025-03-27 00:49:17.794171 | orchestrator | Thursday 27 March 2025 00:48:35 +0000 (0:00:02.277) 0:00:38.924 ******** 2025-03-27 00:49:17.794183 | orchestrator | =============================================================================== 2025-03-27 00:49:17.794195 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.73s 2025-03-27 00:49:17.794226 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.35s 2025-03-27 00:49:17.794239 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.47s 2025-03-27 00:49:17.794257 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.28s 2025-03-27 00:49:17.794270 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.72s 2025-03-27 00:49:17.794294 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.62s 2025-03-27 00:49:17.794307 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.24s 2025-03-27 00:49:17.794319 | orchestrator | 2025-03-27 00:49:17.794331 | orchestrator | 2025-03-27 00:49:17.794343 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-03-27 00:49:17.794355 | orchestrator | 2025-03-27 00:49:17.794368 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-03-27 00:49:17.794380 | orchestrator | Thursday 27 March 2025 00:47:57 +0000 (0:00:00.864) 0:00:00.864 ******** 2025-03-27 00:49:17.794392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-03-27 00:49:17.794406 | orchestrator | 2025-03-27 00:49:17.794418 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-03-27 00:49:17.794431 | orchestrator | Thursday 27 March 2025 00:47:57 +0000 (0:00:00.434) 0:00:01.299 ******** 2025-03-27 00:49:17.794443 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-03-27 00:49:17.794456 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-03-27 00:49:17.794468 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-03-27 00:49:17.794480 | orchestrator | 2025-03-27 00:49:17.794492 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-03-27 00:49:17.794504 | orchestrator | Thursday 27 March 2025 00:47:59 +0000 (0:00:01.769) 0:00:03.068 ******** 2025-03-27 00:49:17.794516 | orchestrator | changed: [testbed-manager] 
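Note: copying the docker-compose.yml is what later fires the "Restart openstackclient service" handler further down in this play, the usual template-and-notify pattern. A generic sketch of that pattern follows; the template name, destination path and file mode are illustrative assumptions rather than the actual role source:

    # tasks (sketch)
    - name: Copy docker-compose.yml file
      ansible.builtin.template:
        src: docker-compose.yml.j2                      # hypothetical template name
        dest: /opt/openstackclient/docker-compose.yml   # assumed destination
        mode: "0640"
      notify: Restart openstackclient service

    # handlers (sketch)
    - name: Restart openstackclient service
      community.docker.docker_compose_v2:
        project_src: /opt/openstackclient
        state: restarted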
2025-03-27 00:49:17.794529 | orchestrator | 2025-03-27 00:49:17.794541 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-03-27 00:49:17.794553 | orchestrator | Thursday 27 March 2025 00:48:01 +0000 (0:00:02.457) 0:00:05.526 ******** 2025-03-27 00:49:17.794565 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-03-27 00:49:17.794578 | orchestrator | ok: [testbed-manager] 2025-03-27 00:49:17.794590 | orchestrator | 2025-03-27 00:49:17.794613 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-03-27 00:49:17.794626 | orchestrator | Thursday 27 March 2025 00:48:51 +0000 (0:00:49.884) 0:00:55.410 ******** 2025-03-27 00:49:17.794638 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.794650 | orchestrator | 2025-03-27 00:49:17.794662 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-03-27 00:49:17.794675 | orchestrator | Thursday 27 March 2025 00:48:54 +0000 (0:00:02.481) 0:00:57.891 ******** 2025-03-27 00:49:17.794687 | orchestrator | ok: [testbed-manager] 2025-03-27 00:49:17.794699 | orchestrator | 2025-03-27 00:49:17.794711 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-03-27 00:49:17.794723 | orchestrator | Thursday 27 March 2025 00:48:55 +0000 (0:00:01.631) 0:00:59.523 ******** 2025-03-27 00:49:17.794735 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.794747 | orchestrator | 2025-03-27 00:49:17.794760 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-03-27 00:49:17.794772 | orchestrator | Thursday 27 March 2025 00:48:58 +0000 (0:00:02.625) 0:01:02.149 ******** 2025-03-27 00:49:17.794784 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.794796 | orchestrator | 2025-03-27 00:49:17.794808 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-03-27 00:49:17.794820 | orchestrator | Thursday 27 March 2025 00:49:00 +0000 (0:00:01.756) 0:01:03.906 ******** 2025-03-27 00:49:17.794833 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.794845 | orchestrator | 2025-03-27 00:49:17.794857 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-03-27 00:49:17.794869 | orchestrator | Thursday 27 March 2025 00:49:01 +0000 (0:00:01.515) 0:01:05.424 ******** 2025-03-27 00:49:17.794881 | orchestrator | ok: [testbed-manager] 2025-03-27 00:49:17.794900 | orchestrator | 2025-03-27 00:49:17.794912 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:49:17.794924 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:49:17.794937 | orchestrator | 2025-03-27 00:49:17.794949 | orchestrator | Thursday 27 March 2025 00:49:02 +0000 (0:00:00.602) 0:01:06.027 ******** 2025-03-27 00:49:17.794961 | orchestrator | =============================================================================== 2025-03-27 00:49:17.794973 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 49.88s 2025-03-27 00:49:17.794985 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.63s 2025-03-27 00:49:17.794998 | orchestrator | osism.services.openstackclient : Copy 
openstack wrapper script ---------- 2.48s 2025-03-27 00:49:17.795014 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.46s 2025-03-27 00:49:17.795027 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.77s 2025-03-27 00:49:17.795039 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.76s 2025-03-27 00:49:17.795051 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.63s 2025-03-27 00:49:17.795063 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.52s 2025-03-27 00:49:17.795075 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.60s 2025-03-27 00:49:17.795088 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.43s 2025-03-27 00:49:17.795100 | orchestrator | 2025-03-27 00:49:17.795112 | orchestrator | 2025-03-27 00:49:17.795124 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 00:49:17.795136 | orchestrator | 2025-03-27 00:49:17.795148 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 00:49:17.795160 | orchestrator | Thursday 27 March 2025 00:47:55 +0000 (0:00:00.530) 0:00:00.530 ******** 2025-03-27 00:49:17.795173 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-03-27 00:49:17.795185 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-03-27 00:49:17.795197 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-03-27 00:49:17.795223 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-03-27 00:49:17.795236 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-03-27 00:49:17.795248 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-03-27 00:49:17.795261 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-03-27 00:49:17.795273 | orchestrator | 2025-03-27 00:49:17.795285 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-03-27 00:49:17.795297 | orchestrator | 2025-03-27 00:49:17.795309 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-03-27 00:49:17.795321 | orchestrator | Thursday 27 March 2025 00:47:58 +0000 (0:00:02.494) 0:00:03.025 ******** 2025-03-27 00:49:17.795346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 00:49:17.795362 | orchestrator | 2025-03-27 00:49:17.795374 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-03-27 00:49:17.795386 | orchestrator | Thursday 27 March 2025 00:48:01 +0000 (0:00:03.643) 0:00:06.668 ******** 2025-03-27 00:49:17.795398 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:49:17.795410 | orchestrator | ok: [testbed-manager] 2025-03-27 00:49:17.795423 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:49:17.795435 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:49:17.795447 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:49:17.795459 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:49:17.795470 | 
orchestrator | ok: [testbed-node-5] 2025-03-27 00:49:17.795488 | orchestrator | 2025-03-27 00:49:17.795501 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-03-27 00:49:17.795519 | orchestrator | Thursday 27 March 2025 00:48:05 +0000 (0:00:03.571) 0:00:10.240 ******** 2025-03-27 00:49:17.795532 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:49:17.795544 | orchestrator | ok: [testbed-manager] 2025-03-27 00:49:17.795556 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:49:17.795568 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:49:17.795580 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:49:17.795592 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:49:17.795604 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:49:17.795621 | orchestrator | 2025-03-27 00:49:17.795634 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-03-27 00:49:17.795646 | orchestrator | Thursday 27 March 2025 00:48:10 +0000 (0:00:04.876) 0:00:15.116 ******** 2025-03-27 00:49:17.795658 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.795670 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:49:17.795682 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:49:17.795695 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:49:17.795707 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:49:17.795719 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:49:17.795731 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:49:17.795743 | orchestrator | 2025-03-27 00:49:17.795755 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-03-27 00:49:17.795767 | orchestrator | Thursday 27 March 2025 00:48:12 +0000 (0:00:02.285) 0:00:17.402 ******** 2025-03-27 00:49:17.795779 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:49:17.795791 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:49:17.795803 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:49:17.795815 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:49:17.795827 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:49:17.795839 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.795851 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:49:17.795863 | orchestrator | 2025-03-27 00:49:17.795875 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-03-27 00:49:17.795887 | orchestrator | Thursday 27 March 2025 00:48:24 +0000 (0:00:12.003) 0:00:29.406 ******** 2025-03-27 00:49:17.795899 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:49:17.795911 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:49:17.795923 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:49:17.795935 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:49:17.795947 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:49:17.795959 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:49:17.795971 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.795983 | orchestrator | 2025-03-27 00:49:17.795996 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-03-27 00:49:17.796008 | orchestrator | Thursday 27 March 2025 00:48:44 +0000 (0:00:19.546) 0:00:48.952 ******** 2025-03-27 00:49:17.796021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 00:49:17.796038 | orchestrator | 2025-03-27 00:49:17.796050 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-03-27 00:49:17.796062 | orchestrator | Thursday 27 March 2025 00:48:47 +0000 (0:00:03.321) 0:00:52.273 ******** 2025-03-27 00:49:17.796074 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-03-27 00:49:17.796087 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-03-27 00:49:17.796099 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-03-27 00:49:17.796112 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-03-27 00:49:17.796124 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-03-27 00:49:17.796136 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-03-27 00:49:17.796157 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-03-27 00:49:17.796169 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-03-27 00:49:17.796181 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-03-27 00:49:17.796193 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-03-27 00:49:17.796206 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-03-27 00:49:17.796274 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-03-27 00:49:17.796288 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-03-27 00:49:17.796300 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-03-27 00:49:17.796312 | orchestrator | 2025-03-27 00:49:17.796324 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-03-27 00:49:17.796337 | orchestrator | Thursday 27 March 2025 00:48:56 +0000 (0:00:08.917) 0:01:01.191 ******** 2025-03-27 00:49:17.796350 | orchestrator | ok: [testbed-manager] 2025-03-27 00:49:17.796362 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:49:17.796375 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:49:17.796387 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:49:17.796399 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:49:17.796411 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:49:17.796423 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:49:17.796435 | orchestrator | 2025-03-27 00:49:17.796448 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-03-27 00:49:17.796460 | orchestrator | Thursday 27 March 2025 00:49:00 +0000 (0:00:03.815) 0:01:05.006 ******** 2025-03-27 00:49:17.796472 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:49:17.796484 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.796497 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:49:17.796509 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:49:17.796521 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:49:17.796533 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:49:17.796545 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:49:17.796557 | orchestrator | 2025-03-27 00:49:17.796570 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-03-27 00:49:17.796587 | orchestrator | Thursday 27 March 2025 00:49:03 +0000 (0:00:03.177) 0:01:08.184 ******** 2025-03-27 00:49:17.796599 | 
orchestrator | ok: [testbed-manager] 2025-03-27 00:49:17.796612 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:49:17.796624 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:49:17.796636 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:49:17.796654 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:49:17.796667 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:49:17.796679 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:49:17.796691 | orchestrator | 2025-03-27 00:49:17.796704 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-03-27 00:49:17.796716 | orchestrator | Thursday 27 March 2025 00:49:05 +0000 (0:00:01.669) 0:01:09.853 ******** 2025-03-27 00:49:17.796728 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:49:17.796741 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:49:17.796753 | orchestrator | ok: [testbed-manager] 2025-03-27 00:49:17.796765 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:49:17.796777 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:49:17.796789 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:49:17.796801 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:49:17.796814 | orchestrator | 2025-03-27 00:49:17.796826 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-03-27 00:49:17.796838 | orchestrator | Thursday 27 March 2025 00:49:07 +0000 (0:00:02.802) 0:01:12.655 ******** 2025-03-27 00:49:17.796850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-03-27 00:49:17.796864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 00:49:17.796884 | orchestrator | 2025-03-27 00:49:17.796897 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-03-27 00:49:17.796909 | orchestrator | Thursday 27 March 2025 00:49:09 +0000 (0:00:01.175) 0:01:13.831 ******** 2025-03-27 00:49:17.796921 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.796933 | orchestrator | 2025-03-27 00:49:17.796946 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-03-27 00:49:17.796958 | orchestrator | Thursday 27 March 2025 00:49:11 +0000 (0:00:02.261) 0:01:16.092 ******** 2025-03-27 00:49:17.796970 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:49:17.796983 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:49:17.797002 | orchestrator | changed: [testbed-manager] 2025-03-27 00:49:17.797016 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:49:17.797029 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:49:17.797041 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:49:17.797053 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:49:17.797066 | orchestrator | 2025-03-27 00:49:17.797078 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:49:17.797090 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:49:17.797103 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:49:17.797115 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-03-27 00:49:17.797132 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:49:17.797145 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:49:17.797158 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:49:17.797170 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:49:17.797182 | orchestrator | 2025-03-27 00:49:17.797195 | orchestrator | Thursday 27 March 2025 00:49:14 +0000 (0:00:03.465) 0:01:19.557 ******** 2025-03-27 00:49:17.797222 | orchestrator | =============================================================================== 2025-03-27 00:49:17.797235 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.55s 2025-03-27 00:49:17.797248 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.00s 2025-03-27 00:49:17.797260 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 8.92s 2025-03-27 00:49:17.797272 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.88s 2025-03-27 00:49:17.797284 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 3.82s 2025-03-27 00:49:17.797297 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.64s 2025-03-27 00:49:17.797309 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.57s 2025-03-27 00:49:17.797321 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.47s 2025-03-27 00:49:17.797333 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 3.32s 2025-03-27 00:49:17.797345 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 3.18s 2025-03-27 00:49:17.797358 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.80s 2025-03-27 00:49:17.797370 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.49s 2025-03-27 00:49:17.797388 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.29s 2025-03-27 00:49:17.797400 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.26s 2025-03-27 00:49:17.797417 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.67s 2025-03-27 00:49:20.857332 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.18s 2025-03-27 00:49:20.857482 | orchestrator | 2025-03-27 00:49:20 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:49:20.859364 | orchestrator | 2025-03-27 00:49:20 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:20.864979 | orchestrator | 2025-03-27 00:49:20 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:20.868206 | orchestrator | 2025-03-27 00:49:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:23.915877 | orchestrator | 2025-03-27 00:49:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:23.916011 | orchestrator | 
2025-03-27 00:49:23 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state STARTED 2025-03-27 00:49:23.916090 | orchestrator | 2025-03-27 00:49:23 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:23.916114 | orchestrator | 2025-03-27 00:49:23 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:23.918763 | orchestrator | 2025-03-27 00:49:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:23.918830 | orchestrator | 2025-03-27 00:49:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:27.011194 | orchestrator | 2025-03-27 00:49:27 | INFO  | Task b2c44015-7769-49d0-9dad-e9ce96b50233 is in state SUCCESS 2025-03-27 00:49:27.014385 | orchestrator | 2025-03-27 00:49:27 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:27.015299 | orchestrator | 2025-03-27 00:49:27 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:27.019702 | orchestrator | 2025-03-27 00:49:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:30.049936 | orchestrator | 2025-03-27 00:49:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:30.050125 | orchestrator | 2025-03-27 00:49:30 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:30.050782 | orchestrator | 2025-03-27 00:49:30 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:30.051683 | orchestrator | 2025-03-27 00:49:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:30.051860 | orchestrator | 2025-03-27 00:49:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:33.109320 | orchestrator | 2025-03-27 00:49:33 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:33.111543 | orchestrator | 2025-03-27 00:49:33 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:36.155412 | orchestrator | 2025-03-27 00:49:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:36.155529 | orchestrator | 2025-03-27 00:49:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:36.155562 | orchestrator | 2025-03-27 00:49:36 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:36.155639 | orchestrator | 2025-03-27 00:49:36 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:36.156575 | orchestrator | 2025-03-27 00:49:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:39.208393 | orchestrator | 2025-03-27 00:49:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:39.208540 | orchestrator | 2025-03-27 00:49:39 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:39.211374 | orchestrator | 2025-03-27 00:49:39 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:39.211431 | orchestrator | 2025-03-27 00:49:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:42.253325 | orchestrator | 2025-03-27 00:49:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:42.253462 | orchestrator | 2025-03-27 00:49:42 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:42.254201 | orchestrator | 2025-03-27 00:49:42 | INFO 
 | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:42.255599 | orchestrator | 2025-03-27 00:49:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:42.255668 | orchestrator | 2025-03-27 00:49:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:45.307183 | orchestrator | 2025-03-27 00:49:45 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:45.309957 | orchestrator | 2025-03-27 00:49:45 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:45.313041 | orchestrator | 2025-03-27 00:49:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:48.357180 | orchestrator | 2025-03-27 00:49:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:48.357378 | orchestrator | 2025-03-27 00:49:48 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:48.365665 | orchestrator | 2025-03-27 00:49:48 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:48.367173 | orchestrator | 2025-03-27 00:49:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:51.424909 | orchestrator | 2025-03-27 00:49:48 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:51.425054 | orchestrator | 2025-03-27 00:49:51 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:51.427348 | orchestrator | 2025-03-27 00:49:51 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:51.427967 | orchestrator | 2025-03-27 00:49:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:51.428172 | orchestrator | 2025-03-27 00:49:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:54.473806 | orchestrator | 2025-03-27 00:49:54 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:54.474209 | orchestrator | 2025-03-27 00:49:54 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:54.475026 | orchestrator | 2025-03-27 00:49:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:49:57.538682 | orchestrator | 2025-03-27 00:49:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:49:57.538828 | orchestrator | 2025-03-27 00:49:57 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:49:57.540923 | orchestrator | 2025-03-27 00:49:57 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:49:57.543340 | orchestrator | 2025-03-27 00:49:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:00.587009 | orchestrator | 2025-03-27 00:49:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:00.587140 | orchestrator | 2025-03-27 00:50:00 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:00.587523 | orchestrator | 2025-03-27 00:50:00 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:50:00.591566 | orchestrator | 2025-03-27 00:50:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:03.639341 | orchestrator | 2025-03-27 00:50:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:03.639485 | orchestrator | 2025-03-27 00:50:03 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in 
state STARTED 2025-03-27 00:50:03.642426 | orchestrator | 2025-03-27 00:50:03 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:50:03.648007 | orchestrator | 2025-03-27 00:50:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:06.691281 | orchestrator | 2025-03-27 00:50:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:06.691425 | orchestrator | 2025-03-27 00:50:06 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:06.694453 | orchestrator | 2025-03-27 00:50:06 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:50:09.747484 | orchestrator | 2025-03-27 00:50:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:09.747599 | orchestrator | 2025-03-27 00:50:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:09.747633 | orchestrator | 2025-03-27 00:50:09 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:09.748799 | orchestrator | 2025-03-27 00:50:09 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:50:09.750419 | orchestrator | 2025-03-27 00:50:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:12.799060 | orchestrator | 2025-03-27 00:50:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:12.799202 | orchestrator | 2025-03-27 00:50:12 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:12.800819 | orchestrator | 2025-03-27 00:50:12 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:50:12.802232 | orchestrator | 2025-03-27 00:50:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:12.802862 | orchestrator | 2025-03-27 00:50:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:15.855433 | orchestrator | 2025-03-27 00:50:15 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:15.855618 | orchestrator | 2025-03-27 00:50:15 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:50:15.857073 | orchestrator | 2025-03-27 00:50:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:18.945437 | orchestrator | 2025-03-27 00:50:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:18.945570 | orchestrator | 2025-03-27 00:50:18 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:18.948950 | orchestrator | 2025-03-27 00:50:18 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:50:18.952683 | orchestrator | 2025-03-27 00:50:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:21.999895 | orchestrator | 2025-03-27 00:50:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:22.000065 | orchestrator | 2025-03-27 00:50:21 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:22.002101 | orchestrator | 2025-03-27 00:50:21 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:50:22.002140 | orchestrator | 2025-03-27 00:50:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:25.060987 | orchestrator | 2025-03-27 00:50:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:25.061114 | 
orchestrator | 2025-03-27 00:50:25 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:25.062358 | orchestrator | 2025-03-27 00:50:25 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:50:25.064481 | orchestrator | 2025-03-27 00:50:25 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:28.114210 | orchestrator | 2025-03-27 00:50:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:28.114407 | orchestrator | 2025-03-27 00:50:28 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:28.114496 | orchestrator | 2025-03-27 00:50:28 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state STARTED 2025-03-27 00:50:28.115150 | orchestrator | 2025-03-27 00:50:28 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:31.168826 | orchestrator | 2025-03-27 00:50:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:31.168979 | orchestrator | 2025-03-27 00:50:31 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:31.171197 | orchestrator | 2025-03-27 00:50:31 | INFO  | Task 4b3d2f79-59ed-499e-a2e9-63db11067638 is in state SUCCESS 2025-03-27 00:50:31.173130 | orchestrator | 2025-03-27 00:50:31.173187 | orchestrator | 2025-03-27 00:50:31.173213 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-03-27 00:50:31.173239 | orchestrator | 2025-03-27 00:50:31.173294 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-03-27 00:50:31.173317 | orchestrator | Thursday 27 March 2025 00:48:15 +0000 (0:00:00.217) 0:00:00.217 ******** 2025-03-27 00:50:31.173342 | orchestrator | ok: [testbed-manager] 2025-03-27 00:50:31.173360 | orchestrator | 2025-03-27 00:50:31.173374 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-03-27 00:50:31.173389 | orchestrator | Thursday 27 March 2025 00:48:16 +0000 (0:00:01.540) 0:00:01.757 ******** 2025-03-27 00:50:31.173403 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-03-27 00:50:31.173425 | orchestrator | 2025-03-27 00:50:31.173439 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-03-27 00:50:31.173453 | orchestrator | Thursday 27 March 2025 00:48:17 +0000 (0:00:00.951) 0:00:02.710 ******** 2025-03-27 00:50:31.173467 | orchestrator | changed: [testbed-manager] 2025-03-27 00:50:31.173481 | orchestrator | 2025-03-27 00:50:31.173494 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-03-27 00:50:31.173508 | orchestrator | Thursday 27 March 2025 00:48:20 +0000 (0:00:03.054) 0:00:05.764 ******** 2025-03-27 00:50:31.173522 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
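Note: the retry on "Manage phpmyadmin service" above follows the same until/retries pattern as in the homer and openstackclient plays and typically only has to wait for the image pull and container start. The related "Wait for an healthy service" handler seen in the openstackclient play polls Docker until the container reports a healthy state. A generic sketch of such a health wait, with a hypothetical container name and retry budget (not the osism implementation):

    - name: Wait for a healthy service
      community.docker.docker_container_info:
        name: phpmyadmin          # hypothetical container name
      register: info
      until: >-
        info.exists and
        ((info.container.State.Health | default({})).Status | default('healthy')) == 'healthy'
      retries: 30
      delay: 5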
2025-03-27 00:50:31.173536 | orchestrator | ok: [testbed-manager] 2025-03-27 00:50:31.173550 | orchestrator | 2025-03-27 00:50:31.173564 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-03-27 00:50:31.173578 | orchestrator | Thursday 27 March 2025 00:49:21 +0000 (0:01:00.883) 0:01:06.647 ******** 2025-03-27 00:50:31.173591 | orchestrator | changed: [testbed-manager] 2025-03-27 00:50:31.173605 | orchestrator | 2025-03-27 00:50:31.173619 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:50:31.173633 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:50:31.173669 | orchestrator | 2025-03-27 00:50:31.173683 | orchestrator | Thursday 27 March 2025 00:49:25 +0000 (0:00:03.538) 0:01:10.186 ******** 2025-03-27 00:50:31.173698 | orchestrator | =============================================================================== 2025-03-27 00:50:31.173713 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 60.88s 2025-03-27 00:50:31.173729 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.54s 2025-03-27 00:50:31.173744 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 3.06s 2025-03-27 00:50:31.173760 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.54s 2025-03-27 00:50:31.173776 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.95s 2025-03-27 00:50:31.173792 | orchestrator | 2025-03-27 00:50:31.173807 | orchestrator | 2025-03-27 00:50:31.173822 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-03-27 00:50:31.173837 | orchestrator | 2025-03-27 00:50:31.173853 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-03-27 00:50:31.173868 | orchestrator | Thursday 27 March 2025 00:47:50 +0000 (0:00:00.420) 0:00:00.420 ******** 2025-03-27 00:50:31.173882 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 00:50:31.173897 | orchestrator | 2025-03-27 00:50:31.173911 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-03-27 00:50:31.173924 | orchestrator | Thursday 27 March 2025 00:47:52 +0000 (0:00:01.851) 0:00:02.272 ******** 2025-03-27 00:50:31.173938 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-27 00:50:31.173951 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-27 00:50:31.173965 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-27 00:50:31.173978 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-27 00:50:31.173992 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-27 00:50:31.174005 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-27 00:50:31.174105 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-27 00:50:31.174123 | orchestrator | changed: [testbed-manager] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-27 00:50:31.174137 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-27 00:50:31.174151 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-27 00:50:31.174165 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-27 00:50:31.174179 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-27 00:50:31.174192 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-27 00:50:31.174206 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-27 00:50:31.174220 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-27 00:50:31.174234 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-27 00:50:31.174271 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-27 00:50:31.174300 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-27 00:50:31.174314 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-27 00:50:31.174328 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-27 00:50:31.174351 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-27 00:50:31.174365 | orchestrator | 2025-03-27 00:50:31.174378 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-03-27 00:50:31.174392 | orchestrator | Thursday 27 March 2025 00:47:58 +0000 (0:00:05.467) 0:00:07.740 ******** 2025-03-27 00:50:31.174405 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 00:50:31.174426 | orchestrator | 2025-03-27 00:50:31.174441 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-03-27 00:50:31.174455 | orchestrator | Thursday 27 March 2025 00:48:01 +0000 (0:00:02.777) 0:00:10.517 ******** 2025-03-27 00:50:31.174474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.174492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.174508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.174522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.174536 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.174550 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.174578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.174593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-03-27 00:50:31.174608 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174673 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174742 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174784 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.174823 | orchestrator | 2025-03-27 00:50:31.174837 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-03-27 00:50:31.174851 | orchestrator | Thursday 27 March 2025 00:48:07 +0000 (0:00:06.438) 0:00:16.955 ******** 2025-03-27 00:50:31.174871 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.174886 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.174905 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.174920 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:50:31.174935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.174949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.174964 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.174984 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:50:31.174998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175103 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:50:31.175117 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:50:31.175131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175180 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:50:31.175199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175229 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175271 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:50:31.175287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175336 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:50:31.175350 | orchestrator | 2025-03-27 00:50:31.175364 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-03-27 00:50:31.175378 | orchestrator | Thursday 27 March 2025 00:48:09 +0000 (0:00:02.141) 0:00:19.097 ******** 2025-03-27 00:50:31.175392 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175413 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175428 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175442 | orchestrator | skipping: [testbed-manager] 2025-03-27 00:50:31.175456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175527 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:50:31.175541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175619 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:50:31.175633 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:50:31.175647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175702 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:50:31.175716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.175766 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:50:31.175780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-27 00:50:31.175794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-27 00:50:31.175814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-27 00:50:31.175829 | orchestrator | skipping: [testbed-node-5]
2025-03-27 00:50:31.175843 | orchestrator |
2025-03-27 00:50:31.175857 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-03-27 00:50:31.175870 | orchestrator | Thursday 27 March 2025 00:48:12 +0000 (0:00:02.554) 0:00:21.652 ********
2025-03-27 00:50:31.175884 | orchestrator | skipping: [testbed-manager]
2025-03-27 00:50:31.175898 | orchestrator | skipping: [testbed-node-0]
2025-03-27 00:50:31.175911 | orchestrator | skipping: [testbed-node-1]
2025-03-27 00:50:31.175925 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:50:31.175939 | orchestrator | skipping: [testbed-node-3]
2025-03-27 00:50:31.175952 | orchestrator | skipping: [testbed-node-4]
2025-03-27 00:50:31.175966 | orchestrator | skipping: [testbed-node-5]
2025-03-27 00:50:31.175979 | orchestrator |
2025-03-27 00:50:31.175993 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-03-27 00:50:31.176007 | orchestrator | Thursday 27 March 2025 00:48:13 +0000 (0:00:01.316) 0:00:22.968 ********
2025-03-27 00:50:31.176021 | orchestrator | skipping: [testbed-manager]
2025-03-27 00:50:31.176034 | orchestrator | skipping: [testbed-node-0]
2025-03-27 00:50:31.176048 | orchestrator | skipping: [testbed-node-1]
2025-03-27 00:50:31.176061 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:50:31.176075 | orchestrator | skipping: [testbed-node-3]
2025-03-27 00:50:31.176089 | orchestrator | skipping: [testbed-node-4]
2025-03-27 00:50:31.176102 | orchestrator | skipping: [testbed-node-5]
2025-03-27 00:50:31.176116 | orchestrator |
2025-03-27 00:50:31.176129 | orchestrator | TASK [common : Ensure fluentd image is present for label check] ****************
2025-03-27 00:50:31.176143 | orchestrator | Thursday 27 March 2025 00:48:14 +0000 (0:00:01.048) 0:00:24.017 ********
2025-03-27 00:50:31.176157 | orchestrator | ok: [testbed-node-0]
2025-03-27 00:50:31.176170 | orchestrator | changed: [testbed-node-1]
2025-03-27 00:50:31.176184 | orchestrator | changed: [testbed-node-2]
2025-03-27 00:50:31.176197 | orchestrator | changed: [testbed-node-3]
2025-03-27 00:50:31.176211 | orchestrator | changed: [testbed-node-4]
2025-03-27 00:50:31.176224 | orchestrator | changed: [testbed-node-5]
2025-03-27 00:50:31.176238 | orchestrator | changed: [testbed-manager]
2025-03-27 00:50:31.176269 | orchestrator |
2025-03-27 00:50:31.176283 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ******************************
2025-03-27 00:50:31.176297 | orchestrator | Thursday 27 March 2025 00:48:55 +0000 (0:00:40.787) 0:01:04.804 ********
2025-03-27 00:50:31.176311 | orchestrator | ok: [testbed-manager]
2025-03-27 00:50:31.176330 | orchestrator | ok: [testbed-node-2]
2025-03-27 00:50:31.176344 | orchestrator | ok: [testbed-node-3]
2025-03-27 00:50:31.176358 | orchestrator | ok: [testbed-node-1]
2025-03-27 00:50:31.176371 | orchestrator | ok: [testbed-node-0]
2025-03-27 00:50:31.176385 | orchestrator | ok: [testbed-node-4]
2025-03-27 00:50:31.176403 | orchestrator | ok: [testbed-node-5]
2025-03-27 00:50:31.176417 | orchestrator |
2025-03-27 00:50:31.176431 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-03-27 00:50:31.176445 | orchestrator | Thursday 27 March 2025 00:48:59 +0000 (0:00:04.542) 0:01:09.347 ********
2025-03-27 00:50:31.176459 | orchestrator | ok: [testbed-manager]
2025-03-27 00:50:31.176479 | orchestrator | ok: [testbed-node-0]
2025-03-27 00:50:31.176493 | orchestrator | ok: [testbed-node-1]
2025-03-27 00:50:31.176506 | orchestrator | ok: [testbed-node-2]
2025-03-27 00:50:31.176520 | orchestrator | ok: [testbed-node-3]
2025-03-27 00:50:31.176533 | orchestrator | ok: [testbed-node-4]
2025-03-27 00:50:31.176546 | orchestrator | ok: [testbed-node-5]
2025-03-27 00:50:31.176560 | orchestrator |
2025-03-27 00:50:31.176573 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ******************************
2025-03-27 00:50:31.176587 | orchestrator | Thursday 27 March 2025 00:49:02 +0000 (0:00:02.251) 0:01:11.599 ********
2025-03-27 00:50:31.176601 | orchestrator | skipping: [testbed-manager]
2025-03-27 00:50:31.176615 | orchestrator | skipping: [testbed-node-0]
2025-03-27 00:50:31.176629 | orchestrator | skipping: [testbed-node-1]
2025-03-27 00:50:31.176642 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:50:31.176656 | orchestrator | skipping: [testbed-node-3]
2025-03-27 00:50:31.176669 | orchestrator | skipping: [testbed-node-4]
2025-03-27 00:50:31.176683 | orchestrator | skipping: [testbed-node-5]
2025-03-27 00:50:31.176696 | orchestrator |
2025-03-27 00:50:31.176710 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-03-27 00:50:31.176723 | orchestrator | Thursday 27 March 2025 00:49:03 +0000 (0:00:01.166) 0:01:12.765 ********
2025-03-27 00:50:31.176737 | orchestrator | skipping: [testbed-manager]
2025-03-27 00:50:31.176750 | orchestrator | skipping: [testbed-node-0]
2025-03-27 00:50:31.176764 | orchestrator | skipping: [testbed-node-1]
2025-03-27 00:50:31.176777 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:50:31.176791 | orchestrator | skipping: [testbed-node-3]
2025-03-27 00:50:31.176804 | orchestrator | skipping: [testbed-node-4]
2025-03-27 00:50:31.176817 | orchestrator | skipping: [testbed-node-5]
2025-03-27 00:50:31.176831 | orchestrator |
2025-03-27 00:50:31.176845 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-03-27 00:50:31.176858 | orchestrator | Thursday 27 March 2025 00:49:04 +0000 (0:00:00.859) 0:01:13.624 ********
2025-03-27 00:50:31.176872 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-27 00:50:31.176887 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.176906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.176921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.176948 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.176964 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.176978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.176992 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.177006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.177025 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.177039 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.177075 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.177090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-03-27 00:50:31.177105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.177124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.177154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.177179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.177203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.177237 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.177307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-27 00:50:31.177334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-27 00:50:31.177357 | orchestrator |
2025-03-27 00:50:31.177377 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-03-27 00:50:31.177391 | orchestrator | Thursday 27 March 2025 00:49:09 +0000 (0:00:05.000) 0:01:18.625 ********
2025-03-27 00:50:31.177406 | orchestrator | [WARNING]: Skipped
2025-03-27 00:50:31.177419 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-03-27 00:50:31.177433 | orchestrator | to this access issue:
2025-03-27 00:50:31.177447 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-03-27 00:50:31.177460 | orchestrator | directory
2025-03-27 00:50:31.177474 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-27 00:50:31.177487 | orchestrator |
2025-03-27 00:50:31.177501 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-03-27 00:50:31.177515 | orchestrator | Thursday 27 March 2025 00:49:10 +0000 (0:00:01.234) 0:01:19.860 ********
2025-03-27 00:50:31.177528 | orchestrator | [WARNING]: Skipped
2025-03-27 00:50:31.177542 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-03-27 00:50:31.177555 | orchestrator | to this access issue:
2025-03-27 00:50:31.177569 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-03-27 00:50:31.177583 | orchestrator | directory
2025-03-27 00:50:31.177596 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-27 00:50:31.177610 | orchestrator |
2025-03-27 00:50:31.177624 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-03-27 00:50:31.177637 | orchestrator | Thursday 27 March 2025 00:49:11 +0000 (0:00:00.981) 0:01:20.842 ********
2025-03-27 00:50:31.177650 | orchestrator | [WARNING]: Skipped
2025-03-27 00:50:31.177664 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-03-27 00:50:31.177677 | orchestrator | to this access issue:
2025-03-27 00:50:31.177691 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-03-27 00:50:31.177704 | orchestrator | directory
2025-03-27 00:50:31.177718 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-27 00:50:31.177731 | orchestrator |
2025-03-27 00:50:31.177744 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-03-27 00:50:31.177758 | orchestrator | Thursday 27 March 2025 00:49:12 +0000 (0:00:00.740) 0:01:21.582 ********
2025-03-27 00:50:31.177771 | orchestrator | [WARNING]: Skipped
2025-03-27 00:50:31.177795 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-03-27 00:50:31.177809 | orchestrator | to this access issue:
2025-03-27 00:50:31.177823 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-03-27 00:50:31.177836 | orchestrator | directory
2025-03-27 00:50:31.177850 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-27 00:50:31.177864 | orchestrator |
2025-03-27 00:50:31.177877 | orchestrator | TASK [common : Copying over td-agent.conf] *************************************
2025-03-27 00:50:31.177891 | orchestrator | Thursday 27 March 2025 00:49:12 +0000 (0:00:00.582) 0:01:22.165 ********
2025-03-27 00:50:31.177904 | orchestrator | changed: [testbed-manager]
2025-03-27 00:50:31.177918 | orchestrator | changed: [testbed-node-0]
2025-03-27 00:50:31.177931 | orchestrator | changed: [testbed-node-1]
2025-03-27 00:50:31.177945 | orchestrator | changed: [testbed-node-3]
2025-03-27 00:50:31.177959 | orchestrator | changed: [testbed-node-2]
2025-03-27 00:50:31.177972 | orchestrator | changed: [testbed-node-4]
2025-03-27 00:50:31.177985 | orchestrator | changed: [testbed-node-5]
2025-03-27 00:50:31.177999 | orchestrator |
2025-03-27 00:50:31.178012 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-03-27 00:50:31.178060 | orchestrator | Thursday 27 March 2025 00:49:18 +0000 (0:00:05.881) 0:01:28.047 ********
2025-03-27 00:50:31.178074 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-27 00:50:31.178088 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-27 00:50:31.178102 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-27 00:50:31.178116 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-27 00:50:31.178129 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-27 00:50:31.178143 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-27 00:50:31.178157 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-27 00:50:31.178170 | orchestrator |
2025-03-27 00:50:31.178184 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-03-27 00:50:31.178197 | orchestrator | Thursday 27 March 2025 00:49:22 +0000 (0:00:04.095) 0:01:32.142 ********
2025-03-27 00:50:31.178211 | orchestrator | changed: [testbed-manager]
2025-03-27 00:50:31.178225 | orchestrator | changed: [testbed-node-0]
2025-03-27 00:50:31.178238 | orchestrator | changed: [testbed-node-1]
2025-03-27 00:50:31.178302 | orchestrator | changed: [testbed-node-2]
2025-03-27 00:50:31.178316 | orchestrator | changed: [testbed-node-3]
2025-03-27 00:50:31.178338 | orchestrator | changed: [testbed-node-4]
2025-03-27 00:50:31.178352 | orchestrator | changed: [testbed-node-5]
2025-03-27 00:50:31.178366 | orchestrator |
2025-03-27 00:50:31.178379 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-03-27 00:50:31.178393 | orchestrator | Thursday 27 March 2025 00:49:25 +0000 (0:00:02.907) 0:01:35.049 ********
2025-03-27 00:50:31.178408 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.178429 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.178452 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.178467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.178482 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.178501 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.178524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.178539 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.178554 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.178576 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.178595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.178610 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.178625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.178639 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.178653 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.178674 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.178689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:50:31.178709 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.178728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-27 00:50:31.178746 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-27 00:50:31.178761 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-27 00:50:31.178775 | orchestrator |
2025-03-27 00:50:31.178789 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-03-27 00:50:31.178803 | orchestrator | Thursday 27 March 2025 00:49:28 +0000 (0:00:03.162) 0:01:38.212 ********
2025-03-27 00:50:31.178816 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-27 00:50:31.178829 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-27 00:50:31.178841 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-27 00:50:31.178853 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-27 00:50:31.178865 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-27 00:50:31.178877 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-27 00:50:31.178889 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-27 00:50:31.178902 | orchestrator |
2025-03-27 00:50:31.178914 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-03-27 00:50:31.178936 | orchestrator | Thursday 27 March 2025 00:49:31 +0000 (0:00:02.658) 0:01:40.871 ********
2025-03-27 00:50:31.178954 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-27 00:50:31.178966 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-27 00:50:31.178978 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-27 00:50:31.178991 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-27 00:50:31.179002 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-27 00:50:31.179015 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-27 00:50:31.179027 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-27 00:50:31.179039 | orchestrator |
2025-03-27 00:50:31.179051 | orchestrator | TASK [common : Check common
containers] **************************************** 2025-03-27 00:50:31.179063 | orchestrator | Thursday 27 March 2025 00:49:35 +0000 (0:00:03.754) 0:01:44.625 ******** 2025-03-27 00:50:31.179075 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.179129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.179144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.179157 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.179197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.179224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179236 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.179353 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-27 00:50:31.179407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179433 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179446 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179458 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179471 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:50:31.179502 | orchestrator | 2025-03-27 00:50:31.179514 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-03-27 00:50:31.179527 | orchestrator | Thursday 27 March 2025 00:49:39 +0000 (0:00:04.381) 0:01:49.007 ******** 2025-03-27 00:50:31.179539 | orchestrator | changed: [testbed-manager] 2025-03-27 00:50:31.179557 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:50:31.179568 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:50:31.179578 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:50:31.179588 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:50:31.179597 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:50:31.179607 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:50:31.179622 | orchestrator | 2025-03-27 00:50:31.179632 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-03-27 00:50:31.179642 | orchestrator | Thursday 27 March 2025 00:49:41 +0000 (0:00:01.908) 0:01:50.916 ******** 2025-03-27 00:50:31.179652 | orchestrator | changed: [testbed-manager] 2025-03-27 00:50:31.179663 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:50:31.179672 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:50:31.179682 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:50:31.179692 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:50:31.179702 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:50:31.179712 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:50:31.179722 | orchestrator | 2025-03-27 00:50:31.179732 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-27 00:50:31.179742 | orchestrator | Thursday 27 March 2025 00:49:43 +0000 (0:00:01.581) 0:01:52.497 ******** 2025-03-27 00:50:31.179752 | orchestrator | 2025-03-27 00:50:31.179762 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-27 00:50:31.179772 | orchestrator | Thursday 27 March 2025 00:49:43 +0000 (0:00:00.068) 0:01:52.566 ******** 2025-03-27 00:50:31.179782 | 
orchestrator | 2025-03-27 00:50:31.179792 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-27 00:50:31.179802 | orchestrator | Thursday 27 March 2025 00:49:43 +0000 (0:00:00.062) 0:01:52.628 ******** 2025-03-27 00:50:31.179812 | orchestrator | 2025-03-27 00:50:31.179822 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-27 00:50:31.179832 | orchestrator | Thursday 27 March 2025 00:49:43 +0000 (0:00:00.053) 0:01:52.682 ******** 2025-03-27 00:50:31.179841 | orchestrator | 2025-03-27 00:50:31.179852 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-27 00:50:31.179861 | orchestrator | Thursday 27 March 2025 00:49:43 +0000 (0:00:00.269) 0:01:52.952 ******** 2025-03-27 00:50:31.179871 | orchestrator | 2025-03-27 00:50:31.179881 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-27 00:50:31.179891 | orchestrator | Thursday 27 March 2025 00:49:43 +0000 (0:00:00.059) 0:01:53.011 ******** 2025-03-27 00:50:31.179901 | orchestrator | 2025-03-27 00:50:31.179911 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-27 00:50:31.179921 | orchestrator | Thursday 27 March 2025 00:49:43 +0000 (0:00:00.057) 0:01:53.069 ******** 2025-03-27 00:50:31.179931 | orchestrator | 2025-03-27 00:50:31.179941 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-03-27 00:50:31.179951 | orchestrator | Thursday 27 March 2025 00:49:43 +0000 (0:00:00.072) 0:01:53.142 ******** 2025-03-27 00:50:31.179965 | orchestrator | changed: [testbed-manager] 2025-03-27 00:50:31.179976 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:50:31.179986 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:50:31.179996 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:50:31.180005 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:50:31.180015 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:50:31.180025 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:50:31.180035 | orchestrator | 2025-03-27 00:50:31.180045 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-03-27 00:50:31.180055 | orchestrator | Thursday 27 March 2025 00:49:52 +0000 (0:00:08.547) 0:02:01.690 ******** 2025-03-27 00:50:31.180065 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:50:31.180075 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:50:31.180085 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:50:31.180095 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:50:31.180104 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:50:31.180114 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:50:31.180124 | orchestrator | changed: [testbed-manager] 2025-03-27 00:50:31.180134 | orchestrator | 2025-03-27 00:50:31.180144 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-03-27 00:50:31.180154 | orchestrator | Thursday 27 March 2025 00:50:17 +0000 (0:00:25.482) 0:02:27.173 ******** 2025-03-27 00:50:31.180164 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:50:31.180174 | orchestrator | ok: [testbed-manager] 2025-03-27 00:50:31.180184 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:50:31.180194 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:50:31.180204 | orchestrator | ok: 
[testbed-node-3]
2025-03-27 00:50:31.180214 | orchestrator | ok: [testbed-node-4]
2025-03-27 00:50:31.180223 | orchestrator | ok: [testbed-node-5]
2025-03-27 00:50:31.180233 | orchestrator |
2025-03-27 00:50:31.180256 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-03-27 00:50:31.180267 | orchestrator | Thursday 27 March 2025 00:50:20 +0000 (0:00:02.736) 0:02:29.909 ********
2025-03-27 00:50:31.180277 | orchestrator | changed: [testbed-node-0]
2025-03-27 00:50:31.180287 | orchestrator | changed: [testbed-manager]
2025-03-27 00:50:31.180297 | orchestrator | changed: [testbed-node-1]
2025-03-27 00:50:31.180307 | orchestrator | changed: [testbed-node-2]
2025-03-27 00:50:31.180317 | orchestrator | changed: [testbed-node-4]
2025-03-27 00:50:31.180327 | orchestrator | changed: [testbed-node-5]
2025-03-27 00:50:31.180337 | orchestrator | changed: [testbed-node-3]
2025-03-27 00:50:31.180347 | orchestrator |
2025-03-27 00:50:31.180357 | orchestrator | PLAY RECAP *********************************************************************
2025-03-27 00:50:31.180368 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-27 00:50:31.180379 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-27 00:50:31.180390 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-27 00:50:31.180405 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-27 00:50:34.225386 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-27 00:50:34.225512 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-27 00:50:34.225531 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-27 00:50:34.225546 | orchestrator |
2025-03-27 00:50:34.225561 | orchestrator |
2025-03-27 00:50:34.225604 | orchestrator | TASKS RECAP ********************************************************************
2025-03-27 00:50:34.225620 | orchestrator | Thursday 27 March 2025 00:50:30 +0000 (0:00:10.149) 0:02:40.059 ********
2025-03-27 00:50:34.225634 | orchestrator | ===============================================================================
2025-03-27 00:50:34.225648 | orchestrator | common : Ensure fluentd image is present for label check --------------- 40.79s
2025-03-27 00:50:34.225662 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 25.48s
2025-03-27 00:50:34.225690 | orchestrator | common : Restart cron container ---------------------------------------- 10.15s
2025-03-27 00:50:34.225705 | orchestrator | common : Restart fluentd container -------------------------------------- 8.55s
2025-03-27 00:50:34.225719 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.44s
2025-03-27 00:50:34.225733 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 5.88s
2025-03-27 00:50:34.225746 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.47s
2025-03-27 00:50:34.225760 | orchestrator | common : Copying over config.json files for services -------------------- 5.00s
2025-03-27 00:50:34.225773 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 4.54s
2025-03-27 00:50:34.225787 | orchestrator | common : Check common containers ---------------------------------------- 4.38s
2025-03-27 00:50:34.225800 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.10s
2025-03-27 00:50:34.225814 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.75s
2025-03-27 00:50:34.225829 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.16s
2025-03-27 00:50:34.225843 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.91s
2025-03-27 00:50:34.225857 | orchestrator | common : include_tasks -------------------------------------------------- 2.78s
2025-03-27 00:50:34.225870 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.74s
2025-03-27 00:50:34.225884 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.66s
2025-03-27 00:50:34.225897 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.55s
2025-03-27 00:50:34.225911 | orchestrator | common : Set fluentd facts ---------------------------------------------- 2.25s
2025-03-27 00:50:34.225924 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.14s
2025-03-27 00:50:34.225939 | orchestrator | 2025-03-27 00:50:31 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 00:50:34.225953 | orchestrator | 2025-03-27 00:50:31 | INFO  | Wait 1 second(s) until the next check
2025-03-27 00:50:34.225983 | orchestrator | 2025-03-27 00:50:34 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED
2025-03-27 00:50:34.228674 | orchestrator | 2025-03-27 00:50:34 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED
2025-03-27 00:50:34.228710 | orchestrator | 2025-03-27 00:50:34 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED
2025-03-27 00:50:34.231563 | orchestrator | 2025-03-27 00:50:34 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED
2025-03-27 00:50:34.236389 | orchestrator | 2025-03-27 00:50:34 | INFO  | Task 337b409b-43b5-4303-8825-a36e7a1d125c is in state STARTED
2025-03-27 00:50:34.242356 | orchestrator | 2025-03-27 00:50:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 00:50:37.288375 | orchestrator | 2025-03-27 00:50:34 | INFO  | Wait 1 second(s) until the next check
2025-03-27 00:50:37.288502 | orchestrator | 2025-03-27 00:50:37 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED
2025-03-27 00:50:37.289826 | orchestrator | 2025-03-27 00:50:37 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED
2025-03-27 00:50:37.290376 | orchestrator | 2025-03-27 00:50:37 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED
2025-03-27 00:50:37.291144 | orchestrator | 2025-03-27 00:50:37 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED
2025-03-27 00:50:37.291933 | orchestrator | 2025-03-27 00:50:37 | INFO  | Task 337b409b-43b5-4303-8825-a36e7a1d125c is in state STARTED
2025-03-27 00:50:37.294597 | orchestrator | 2025-03-27 00:50:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 00:50:40.360327 | orchestrator | 2025-03-27 00:50:37 | INFO  | Wait 1 second(s) until the next check
2025-03-27 00:50:40.360448 | 
orchestrator | 2025-03-27 00:50:40 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED 2025-03-27 00:50:40.363783 | orchestrator | 2025-03-27 00:50:40 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:50:40.366121 | orchestrator | 2025-03-27 00:50:40 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:50:40.368424 | orchestrator | 2025-03-27 00:50:40 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:43.418299 | orchestrator | 2025-03-27 00:50:40 | INFO  | Task 337b409b-43b5-4303-8825-a36e7a1d125c is in state STARTED 2025-03-27 00:50:43.418423 | orchestrator | 2025-03-27 00:50:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:43.418442 | orchestrator | 2025-03-27 00:50:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:43.418475 | orchestrator | 2025-03-27 00:50:43 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED 2025-03-27 00:50:43.419185 | orchestrator | 2025-03-27 00:50:43 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:50:43.421711 | orchestrator | 2025-03-27 00:50:43 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:50:43.424758 | orchestrator | 2025-03-27 00:50:43 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:43.424796 | orchestrator | 2025-03-27 00:50:43 | INFO  | Task 337b409b-43b5-4303-8825-a36e7a1d125c is in state STARTED 2025-03-27 00:50:43.429831 | orchestrator | 2025-03-27 00:50:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:46.493007 | orchestrator | 2025-03-27 00:50:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:46.493153 | orchestrator | 2025-03-27 00:50:46 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED 2025-03-27 00:50:46.493349 | orchestrator | 2025-03-27 00:50:46 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:50:46.495644 | orchestrator | 2025-03-27 00:50:46 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:50:46.496306 | orchestrator | 2025-03-27 00:50:46 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:46.497159 | orchestrator | 2025-03-27 00:50:46 | INFO  | Task 337b409b-43b5-4303-8825-a36e7a1d125c is in state STARTED 2025-03-27 00:50:46.498948 | orchestrator | 2025-03-27 00:50:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:49.548326 | orchestrator | 2025-03-27 00:50:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:49.548469 | orchestrator | 2025-03-27 00:50:49 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED 2025-03-27 00:50:49.550216 | orchestrator | 2025-03-27 00:50:49 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:50:49.551902 | orchestrator | 2025-03-27 00:50:49 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:50:49.554327 | orchestrator | 2025-03-27 00:50:49 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:49.556232 | orchestrator | 2025-03-27 00:50:49 | INFO  | Task 337b409b-43b5-4303-8825-a36e7a1d125c is in state STARTED 2025-03-27 00:50:49.558164 | orchestrator | 2025-03-27 00:50:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is 
in state STARTED 2025-03-27 00:50:52.597955 | orchestrator | 2025-03-27 00:50:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:52.598132 | orchestrator | 2025-03-27 00:50:52 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED 2025-03-27 00:50:52.599024 | orchestrator | 2025-03-27 00:50:52 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:50:52.600670 | orchestrator | 2025-03-27 00:50:52 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:50:52.601521 | orchestrator | 2025-03-27 00:50:52 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:52.603315 | orchestrator | 2025-03-27 00:50:52 | INFO  | Task 337b409b-43b5-4303-8825-a36e7a1d125c is in state STARTED 2025-03-27 00:50:52.604516 | orchestrator | 2025-03-27 00:50:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:52.604578 | orchestrator | 2025-03-27 00:50:52 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:55.643029 | orchestrator | 2025-03-27 00:50:55 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED 2025-03-27 00:50:55.643886 | orchestrator | 2025-03-27 00:50:55 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:50:55.645073 | orchestrator | 2025-03-27 00:50:55 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:50:55.646663 | orchestrator | 2025-03-27 00:50:55 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:55.650388 | orchestrator | 2025-03-27 00:50:55 | INFO  | Task 337b409b-43b5-4303-8825-a36e7a1d125c is in state STARTED 2025-03-27 00:50:55.652044 | orchestrator | 2025-03-27 00:50:55 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:50:58.697595 | orchestrator | 2025-03-27 00:50:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:50:58.697725 | orchestrator | 2025-03-27 00:50:58 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED 2025-03-27 00:50:58.698405 | orchestrator | 2025-03-27 00:50:58 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:50:58.699157 | orchestrator | 2025-03-27 00:50:58 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:50:58.702366 | orchestrator | 2025-03-27 00:50:58 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:50:58.702957 | orchestrator | 2025-03-27 00:50:58 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:50:58.703737 | orchestrator | 2025-03-27 00:50:58 | INFO  | Task 337b409b-43b5-4303-8825-a36e7a1d125c is in state SUCCESS 2025-03-27 00:50:58.704698 | orchestrator | 2025-03-27 00:50:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:01.755301 | orchestrator | 2025-03-27 00:50:58 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:01.755434 | orchestrator | 2025-03-27 00:51:01 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED 2025-03-27 00:51:01.759380 | orchestrator | 2025-03-27 00:51:01 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:01.764891 | orchestrator | 2025-03-27 00:51:01 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:01.765926 | orchestrator | 2025-03-27 00:51:01 | INFO  | Task 
706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:01.769000 | orchestrator | 2025-03-27 00:51:01 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:01.770113 | orchestrator | 2025-03-27 00:51:01 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:01.770844 | orchestrator | 2025-03-27 00:51:01 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:04.812601 | orchestrator | 2025-03-27 00:51:04 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED 2025-03-27 00:51:04.812972 | orchestrator | 2025-03-27 00:51:04 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:04.813817 | orchestrator | 2025-03-27 00:51:04 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:04.814519 | orchestrator | 2025-03-27 00:51:04 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:04.815389 | orchestrator | 2025-03-27 00:51:04 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:04.822005 | orchestrator | 2025-03-27 00:51:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:07.857919 | orchestrator | 2025-03-27 00:51:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:07.858111 | orchestrator | 2025-03-27 00:51:07 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state STARTED 2025-03-27 00:51:07.858678 | orchestrator | 2025-03-27 00:51:07 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:07.858715 | orchestrator | 2025-03-27 00:51:07 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:07.859711 | orchestrator | 2025-03-27 00:51:07 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:07.860388 | orchestrator | 2025-03-27 00:51:07 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:07.861139 | orchestrator | 2025-03-27 00:51:07 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:10.901803 | orchestrator | 2025-03-27 00:51:07 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:10.901936 | orchestrator | 2025-03-27 00:51:10.901956 | orchestrator | 2025-03-27 00:51:10.901971 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 00:51:10.901986 | orchestrator | 2025-03-27 00:51:10.902000 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 00:51:10.902068 | orchestrator | Thursday 27 March 2025 00:50:36 +0000 (0:00:00.376) 0:00:00.376 ******** 2025-03-27 00:51:10.902086 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:51:10.902101 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:51:10.902115 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:51:10.902129 | orchestrator | 2025-03-27 00:51:10.902143 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 00:51:10.902157 | orchestrator | Thursday 27 March 2025 00:50:37 +0000 (0:00:00.561) 0:00:00.937 ******** 2025-03-27 00:51:10.902172 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-03-27 00:51:10.902186 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-03-27 00:51:10.902200 | orchestrator | ok: [testbed-node-2] => 
(item=enable_memcached_True) 2025-03-27 00:51:10.902214 | orchestrator | 2025-03-27 00:51:10.902228 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-03-27 00:51:10.902310 | orchestrator | 2025-03-27 00:51:10.902325 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-03-27 00:51:10.902341 | orchestrator | Thursday 27 March 2025 00:50:37 +0000 (0:00:00.356) 0:00:01.294 ******** 2025-03-27 00:51:10.902356 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:51:10.902374 | orchestrator | 2025-03-27 00:51:10.902389 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-03-27 00:51:10.902404 | orchestrator | Thursday 27 March 2025 00:50:38 +0000 (0:00:01.111) 0:00:02.405 ******** 2025-03-27 00:51:10.902420 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-03-27 00:51:10.902436 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-03-27 00:51:10.902452 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-03-27 00:51:10.902467 | orchestrator | 2025-03-27 00:51:10.902482 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-03-27 00:51:10.902498 | orchestrator | Thursday 27 March 2025 00:50:40 +0000 (0:00:01.604) 0:00:04.010 ******** 2025-03-27 00:51:10.902514 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-03-27 00:51:10.902529 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-03-27 00:51:10.902545 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-03-27 00:51:10.902560 | orchestrator | 2025-03-27 00:51:10.902576 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-03-27 00:51:10.902591 | orchestrator | Thursday 27 March 2025 00:50:43 +0000 (0:00:03.027) 0:00:07.037 ******** 2025-03-27 00:51:10.902607 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:51:10.902637 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:51:10.902653 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:51:10.902669 | orchestrator | 2025-03-27 00:51:10.902689 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-03-27 00:51:10.902704 | orchestrator | Thursday 27 March 2025 00:50:46 +0000 (0:00:03.332) 0:00:10.370 ******** 2025-03-27 00:51:10.902717 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:51:10.902731 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:51:10.902745 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:51:10.902758 | orchestrator | 2025-03-27 00:51:10.902772 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:51:10.902786 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:51:10.902801 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:51:10.902816 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:51:10.902829 | orchestrator | 2025-03-27 00:51:10.902843 | orchestrator | 2025-03-27 00:51:10.902856 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 00:51:10.902870 | orchestrator | 
Thursday 27 March 2025 00:50:55 +0000 (0:00:08.248) 0:00:18.618 ******** 2025-03-27 00:51:10.902883 | orchestrator | =============================================================================== 2025-03-27 00:51:10.902897 | orchestrator | memcached : Restart memcached container --------------------------------- 8.25s 2025-03-27 00:51:10.902910 | orchestrator | memcached : Check memcached container ----------------------------------- 3.33s 2025-03-27 00:51:10.902924 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.03s 2025-03-27 00:51:10.902937 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.60s 2025-03-27 00:51:10.902951 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.11s 2025-03-27 00:51:10.902964 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.56s 2025-03-27 00:51:10.902987 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2025-03-27 00:51:10.903000 | orchestrator | 2025-03-27 00:51:10.903014 | orchestrator | 2025-03-27 00:51:10.903027 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 00:51:10.903041 | orchestrator | 2025-03-27 00:51:10.903055 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 00:51:10.903068 | orchestrator | Thursday 27 March 2025 00:50:37 +0000 (0:00:00.449) 0:00:00.449 ******** 2025-03-27 00:51:10.903082 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:51:10.903096 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:51:10.903109 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:51:10.903123 | orchestrator | 2025-03-27 00:51:10.903137 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 00:51:10.903162 | orchestrator | Thursday 27 March 2025 00:50:37 +0000 (0:00:00.563) 0:00:01.013 ******** 2025-03-27 00:51:10.903177 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-03-27 00:51:10.903191 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-03-27 00:51:10.903205 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-03-27 00:51:10.903218 | orchestrator | 2025-03-27 00:51:10.903232 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-03-27 00:51:10.903246 | orchestrator | 2025-03-27 00:51:10.903259 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-03-27 00:51:10.903340 | orchestrator | Thursday 27 March 2025 00:50:38 +0000 (0:00:00.489) 0:00:01.502 ******** 2025-03-27 00:51:10.903355 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:51:10.903369 | orchestrator | 2025-03-27 00:51:10.903383 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-03-27 00:51:10.903397 | orchestrator | Thursday 27 March 2025 00:50:39 +0000 (0:00:01.302) 0:00:02.805 ******** 2025-03-27 00:51:10.903413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903533 | orchestrator | 2025-03-27 00:51:10.903547 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-03-27 00:51:10.903561 | orchestrator | Thursday 27 March 2025 00:50:42 +0000 (0:00:02.626) 0:00:05.432 ******** 2025-03-27 00:51:10.903576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903690 | orchestrator | 2025-03-27 00:51:10.903704 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-03-27 00:51:10.903718 | orchestrator | Thursday 27 March 2025 00:50:46 +0000 (0:00:04.021) 0:00:09.454 ******** 2025-03-27 00:51:10.903732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903832 | orchestrator | 2025-03-27 00:51:10.903847 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-03-27 00:51:10.903861 | orchestrator | Thursday 27 March 2025 00:50:51 +0000 (0:00:04.827) 0:00:14.281 ******** 2025-03-27 00:51:10.903875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.903958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-27 00:51:10.904049 | orchestrator | 2025-03-27 00:51:10.904067 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-27 00:51:10.904081 | orchestrator | Thursday 27 March 2025 00:50:53 +0000 (0:00:02.269) 0:00:16.550 ******** 2025-03-27 00:51:10.904095 | orchestrator | 2025-03-27 00:51:10.904109 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-27 00:51:10.904123 | orchestrator | Thursday 27 March 2025 00:50:53 +0000 (0:00:00.177) 0:00:16.728 ******** 2025-03-27 00:51:10.904137 | orchestrator | 2025-03-27 00:51:10.904150 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-27 00:51:10.904164 | orchestrator | Thursday 27 March 2025 00:50:53 +0000 (0:00:00.071) 0:00:16.800 ******** 2025-03-27 00:51:10.904178 | orchestrator | 2025-03-27 00:51:10.904192 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-03-27 00:51:10.904206 | orchestrator | Thursday 27 March 2025 00:50:53 +0000 (0:00:00.236) 
0:00:17.036 ******** 2025-03-27 00:51:10.904220 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:51:10.904234 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:51:10.904247 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:51:10.904295 | orchestrator | 2025-03-27 00:51:10.904311 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-03-27 00:51:10.904325 | orchestrator | Thursday 27 March 2025 00:51:02 +0000 (0:00:08.245) 0:00:25.282 ******** 2025-03-27 00:51:10.904339 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:51:10.904359 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:51:10.904374 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:51:10.904387 | orchestrator | 2025-03-27 00:51:10.904401 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:51:10.904423 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:51:10.904437 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:51:10.904451 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:51:10.904465 | orchestrator | 2025-03-27 00:51:10.904479 | orchestrator | 2025-03-27 00:51:10.904492 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 00:51:10.904506 | orchestrator | Thursday 27 March 2025 00:51:08 +0000 (0:00:06.749) 0:00:32.031 ******** 2025-03-27 00:51:10.904520 | orchestrator | =============================================================================== 2025-03-27 00:51:10.904533 | orchestrator | redis : Restart redis container ----------------------------------------- 8.25s 2025-03-27 00:51:10.904547 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 6.75s 2025-03-27 00:51:10.904560 | orchestrator | redis : Copying over redis config files --------------------------------- 4.83s 2025-03-27 00:51:10.904574 | orchestrator | redis : Copying over default config.json files -------------------------- 4.02s 2025-03-27 00:51:10.904587 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.63s 2025-03-27 00:51:10.904601 | orchestrator | redis : Check redis containers ------------------------------------------ 2.27s 2025-03-27 00:51:10.904614 | orchestrator | redis : include_tasks --------------------------------------------------- 1.30s 2025-03-27 00:51:10.904628 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.56s 2025-03-27 00:51:10.904642 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2025-03-27 00:51:10.904655 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.49s 2025-03-27 00:51:10.904670 | orchestrator | 2025-03-27 00:51:10 | INFO  | Task c97acb67-cb80-4f85-ab0b-323b74a030e0 is in state SUCCESS 2025-03-27 00:51:10.904696 | orchestrator | 2025-03-27 00:51:10 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:10.906337 | orchestrator | 2025-03-27 00:51:10 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:10.907423 | orchestrator | 2025-03-27 00:51:10 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:10.908318 
| orchestrator | 2025-03-27 00:51:10 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:10.909031 | orchestrator | 2025-03-27 00:51:10 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:13.960314 | orchestrator | 2025-03-27 00:51:10 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:13.960460 | orchestrator | 2025-03-27 00:51:13 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:13.961795 | orchestrator | 2025-03-27 00:51:13 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:13.962657 | orchestrator | 2025-03-27 00:51:13 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:13.962692 | orchestrator | 2025-03-27 00:51:13 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:13.963560 | orchestrator | 2025-03-27 00:51:13 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:13.963638 | orchestrator | 2025-03-27 00:51:13 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:16.996069 | orchestrator | 2025-03-27 00:51:16 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:16.996497 | orchestrator | 2025-03-27 00:51:16 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:16.997427 | orchestrator | 2025-03-27 00:51:16 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:16.998207 | orchestrator | 2025-03-27 00:51:16 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:16.998989 | orchestrator | 2025-03-27 00:51:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:16.999186 | orchestrator | 2025-03-27 00:51:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:20.053558 | orchestrator | 2025-03-27 00:51:20 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:20.054907 | orchestrator | 2025-03-27 00:51:20 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:20.056508 | orchestrator | 2025-03-27 00:51:20 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:20.058296 | orchestrator | 2025-03-27 00:51:20 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:20.059602 | orchestrator | 2025-03-27 00:51:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:20.059809 | orchestrator | 2025-03-27 00:51:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:23.113584 | orchestrator | 2025-03-27 00:51:23 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:23.114967 | orchestrator | 2025-03-27 00:51:23 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:23.116547 | orchestrator | 2025-03-27 00:51:23 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:23.118589 | orchestrator | 2025-03-27 00:51:23 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:23.119539 | orchestrator | 2025-03-27 00:51:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:26.179264 | orchestrator | 2025-03-27 00:51:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:26.179513 | 
orchestrator | 2025-03-27 00:51:26 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:26.179611 | orchestrator | 2025-03-27 00:51:26 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:26.181347 | orchestrator | 2025-03-27 00:51:26 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:26.181976 | orchestrator | 2025-03-27 00:51:26 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:26.182599 | orchestrator | 2025-03-27 00:51:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:26.182678 | orchestrator | 2025-03-27 00:51:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:29.222795 | orchestrator | 2025-03-27 00:51:29 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:29.224572 | orchestrator | 2025-03-27 00:51:29 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:29.225236 | orchestrator | 2025-03-27 00:51:29 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:29.225967 | orchestrator | 2025-03-27 00:51:29 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:29.226731 | orchestrator | 2025-03-27 00:51:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:32.272261 | orchestrator | 2025-03-27 00:51:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:32.272447 | orchestrator | 2025-03-27 00:51:32 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:32.272737 | orchestrator | 2025-03-27 00:51:32 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:32.274167 | orchestrator | 2025-03-27 00:51:32 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:32.275455 | orchestrator | 2025-03-27 00:51:32 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:32.277171 | orchestrator | 2025-03-27 00:51:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:35.320409 | orchestrator | 2025-03-27 00:51:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:35.320537 | orchestrator | 2025-03-27 00:51:35 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:35.321888 | orchestrator | 2025-03-27 00:51:35 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:35.322654 | orchestrator | 2025-03-27 00:51:35 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:35.323836 | orchestrator | 2025-03-27 00:51:35 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:35.324482 | orchestrator | 2025-03-27 00:51:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:38.368496 | orchestrator | 2025-03-27 00:51:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:38.368641 | orchestrator | 2025-03-27 00:51:38 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:38.369100 | orchestrator | 2025-03-27 00:51:38 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:38.369130 | orchestrator | 2025-03-27 00:51:38 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 
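The block of INFO lines above and below is the OSISM client waiting on the Ansible runs it queued: it repeatedly asks the manager for the state of each task and sleeps a second between checks until every task leaves the STARTED state. A minimal Python sketch of that polling pattern, assuming a hypothetical `get_task_state()` accessor (the real client API is not shown in this log):

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll a set of task IDs until none of them is still running.

    Illustrative only: `get_task_state(task_id)` is an assumed helper that
    returns a state string such as "STARTED", "SUCCESS" or "FAILURE".
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so it is safe to discard while iterating
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```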
2025-03-27 00:51:38.369150 | orchestrator | 2025-03-27 00:51:38 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:38.371013 | orchestrator | 2025-03-27 00:51:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:41.412484 | orchestrator | 2025-03-27 00:51:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:41.412582 | orchestrator | 2025-03-27 00:51:41 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:41.412736 | orchestrator | 2025-03-27 00:51:41 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:41.413705 | orchestrator | 2025-03-27 00:51:41 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:41.414799 | orchestrator | 2025-03-27 00:51:41 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:41.415872 | orchestrator | 2025-03-27 00:51:41 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:44.462856 | orchestrator | 2025-03-27 00:51:41 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:44.462986 | orchestrator | 2025-03-27 00:51:44 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:44.464559 | orchestrator | 2025-03-27 00:51:44 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:44.466242 | orchestrator | 2025-03-27 00:51:44 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:44.467672 | orchestrator | 2025-03-27 00:51:44 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:44.469252 | orchestrator | 2025-03-27 00:51:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:44.469819 | orchestrator | 2025-03-27 00:51:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:47.519345 | orchestrator | 2025-03-27 00:51:47 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:47.520356 | orchestrator | 2025-03-27 00:51:47 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:47.523480 | orchestrator | 2025-03-27 00:51:47 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:47.523513 | orchestrator | 2025-03-27 00:51:47 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:47.524462 | orchestrator | 2025-03-27 00:51:47 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:50.580858 | orchestrator | 2025-03-27 00:51:47 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:50.580993 | orchestrator | 2025-03-27 00:51:50 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:50.583055 | orchestrator | 2025-03-27 00:51:50 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:50.583954 | orchestrator | 2025-03-27 00:51:50 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:50.584010 | orchestrator | 2025-03-27 00:51:50 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:50.584306 | orchestrator | 2025-03-27 00:51:50 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:50.584422 | orchestrator | 2025-03-27 00:51:50 | INFO  | Wait 1 second(s) until the next check 
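The container definitions dumped by the redis play above and the openvswitch play further below each carry a 'healthcheck' dictionary (interval, retries, start_period, test, timeout). Conceptually this corresponds to Docker-style health checks; a rough Python sketch of that translation, assuming the numeric values are seconds and using a hypothetical helper name (this is not the actual kolla-ansible implementation):

```python
def to_docker_health_args(healthcheck):
    """Translate a kolla-style healthcheck dict into `docker run` flags.

    Illustrative only: assumes the 'test' list has the form
    ['CMD-SHELL', '<shell command>'] as in the definitions in this log.
    """
    kind, command = healthcheck["test"][0], " ".join(healthcheck["test"][1:])
    assert kind == "CMD-SHELL"
    return [
        f"--health-cmd={command}",
        f"--health-interval={healthcheck['interval']}s",
        f"--health-retries={healthcheck['retries']}",
        f"--health-start-period={healthcheck['start_period']}s",
        f"--health-timeout={healthcheck['timeout']}s",
    ]


# Example with the redis definition from the play above:
print(to_docker_health_args({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
    "timeout": "30",
}))
```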
2025-03-27 00:51:53.611490 | orchestrator | 2025-03-27 00:51:53 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:53.614423 | orchestrator | 2025-03-27 00:51:53 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:53.615364 | orchestrator | 2025-03-27 00:51:53 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:53.616026 | orchestrator | 2025-03-27 00:51:53 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:53.616797 | orchestrator | 2025-03-27 00:51:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:56.675719 | orchestrator | 2025-03-27 00:51:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:56.675847 | orchestrator | 2025-03-27 00:51:56 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:56.677251 | orchestrator | 2025-03-27 00:51:56 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:56.679184 | orchestrator | 2025-03-27 00:51:56 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:51:56.679972 | orchestrator | 2025-03-27 00:51:56 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:51:56.681809 | orchestrator | 2025-03-27 00:51:56 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:51:56.684133 | orchestrator | 2025-03-27 00:51:56 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:51:59.730604 | orchestrator | 2025-03-27 00:51:59 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:51:59.730966 | orchestrator | 2025-03-27 00:51:59 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:51:59.734895 | orchestrator | 2025-03-27 00:51:59 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:52:02.790233 | orchestrator | 2025-03-27 00:51:59 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:02.790387 | orchestrator | 2025-03-27 00:51:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:02.790406 | orchestrator | 2025-03-27 00:51:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:02.790435 | orchestrator | 2025-03-27 00:52:02 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:02.791590 | orchestrator | 2025-03-27 00:52:02 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:02.795582 | orchestrator | 2025-03-27 00:52:02 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state STARTED 2025-03-27 00:52:02.799585 | orchestrator | 2025-03-27 00:52:02 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:02.801177 | orchestrator | 2025-03-27 00:52:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:05.843437 | orchestrator | 2025-03-27 00:52:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:05.843567 | orchestrator | 2025-03-27 00:52:05 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:05.848186 | orchestrator | 2025-03-27 00:52:05 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:05.848783 | orchestrator | 2025-03-27 00:52:05 | INFO  | Task 
a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:05.849992 | orchestrator | 2025-03-27 00:52:05 | INFO  | Task 706c664a-3023-408c-99ac-515b1b0b6360 is in state SUCCESS 2025-03-27 00:52:05.851973 | orchestrator | 2025-03-27 00:52:05.852021 | orchestrator | 2025-03-27 00:52:05.852036 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 00:52:05.852052 | orchestrator | 2025-03-27 00:52:05.852066 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 00:52:05.852081 | orchestrator | Thursday 27 March 2025 00:50:37 +0000 (0:00:00.409) 0:00:00.409 ******** 2025-03-27 00:52:05.852094 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:52:05.852110 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:52:05.852124 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:52:05.852137 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:52:05.852151 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:52:05.852164 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:52:05.852178 | orchestrator | 2025-03-27 00:52:05.852192 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 00:52:05.852206 | orchestrator | Thursday 27 March 2025 00:50:38 +0000 (0:00:01.014) 0:00:01.423 ******** 2025-03-27 00:52:05.852220 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-27 00:52:05.852234 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-27 00:52:05.852248 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-27 00:52:05.852262 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-27 00:52:05.852276 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-27 00:52:05.852326 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-27 00:52:05.852341 | orchestrator | 2025-03-27 00:52:05.852355 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-03-27 00:52:05.852369 | orchestrator | 2025-03-27 00:52:05.852383 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-03-27 00:52:05.852417 | orchestrator | Thursday 27 March 2025 00:50:40 +0000 (0:00:02.508) 0:00:03.932 ******** 2025-03-27 00:52:05.852433 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 00:52:05.852448 | orchestrator | 2025-03-27 00:52:05.852462 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-03-27 00:52:05.852476 | orchestrator | Thursday 27 March 2025 00:50:43 +0000 (0:00:03.007) 0:00:06.939 ******** 2025-03-27 00:52:05.852489 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-03-27 00:52:05.852503 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-03-27 00:52:05.852517 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-03-27 00:52:05.852531 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-03-27 00:52:05.852547 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-03-27 00:52:05.852563 | orchestrator | changed: 
[testbed-node-5] => (item=openvswitch) 2025-03-27 00:52:05.852579 | orchestrator | 2025-03-27 00:52:05.852594 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-03-27 00:52:05.852610 | orchestrator | Thursday 27 March 2025 00:50:45 +0000 (0:00:02.103) 0:00:09.043 ******** 2025-03-27 00:52:05.852626 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-03-27 00:52:05.852647 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-03-27 00:52:05.852664 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-03-27 00:52:05.852680 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-03-27 00:52:05.852695 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-03-27 00:52:05.852711 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-03-27 00:52:05.852727 | orchestrator | 2025-03-27 00:52:05.852743 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-03-27 00:52:05.852759 | orchestrator | Thursday 27 March 2025 00:50:48 +0000 (0:00:03.015) 0:00:12.059 ******** 2025-03-27 00:52:05.852775 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-03-27 00:52:05.852791 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:52:05.852807 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-03-27 00:52:05.852823 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:52:05.852839 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-03-27 00:52:05.852855 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:52:05.852870 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-03-27 00:52:05.852886 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:52:05.852900 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-03-27 00:52:05.852913 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:52:05.852927 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-03-27 00:52:05.852941 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:52:05.852955 | orchestrator | 2025-03-27 00:52:05.852969 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-03-27 00:52:05.852983 | orchestrator | Thursday 27 March 2025 00:50:50 +0000 (0:00:02.054) 0:00:14.114 ******** 2025-03-27 00:52:05.852997 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:52:05.853010 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:52:05.853024 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:52:05.853037 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:52:05.853051 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:52:05.853064 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:52:05.853078 | orchestrator | 2025-03-27 00:52:05.853092 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-03-27 00:52:05.853105 | orchestrator | Thursday 27 March 2025 00:50:51 +0000 (0:00:00.981) 0:00:15.096 ******** 2025-03-27 00:52:05.853134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853268 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853375 | orchestrator | 2025-03-27 00:52:05.853389 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-03-27 00:52:05.853403 | orchestrator | Thursday 27 March 2025 00:50:54 +0000 (0:00:02.762) 0:00:17.858 ******** 2025-03-27 00:52:05.853418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853576 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.853656 | orchestrator | 2025-03-27 00:52:05.853671 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-03-27 00:52:05.853685 | orchestrator | Thursday 27 March 2025 00:50:58 +0000 (0:00:03.726) 0:00:21.584 ******** 2025-03-27 00:52:05.853700 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:52:05.853714 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:52:05.853728 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:52:05.853742 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:52:05.853756 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:52:05.853770 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:52:05.853784 | orchestrator | 2025-03-27 00:52:05.853798 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-03-27 00:52:05.853812 | orchestrator | Thursday 27 March 2025 00:51:01 +0000 (0:00:03.401) 0:00:24.985 ******** 2025-03-27 00:52:05.853826 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:52:05.853840 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:52:05.853854 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:52:05.853868 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:52:05.853882 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:52:05.853896 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:52:05.853909 | orchestrator | 2025-03-27 00:52:05.853924 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-03-27 00:52:05.853938 | orchestrator | Thursday 27 March 2025 00:51:07 +0000 (0:00:05.603) 0:00:30.589 ******** 2025-03-27 00:52:05.853952 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:52:05.853965 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:52:05.853979 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:52:05.853993 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:52:05.854007 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:52:05.854094 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:52:05.854111 | orchestrator | 2025-03-27 00:52:05.854125 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-03-27 00:52:05.854140 | orchestrator | Thursday 27 March 2025 00:51:10 +0000 (0:00:02.659) 0:00:33.249 ******** 2025-03-27 00:52:05.854154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854433 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-27 00:52:05.854469 | orchestrator | 2025-03-27 00:52:05.854483 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-27 00:52:05.854497 | orchestrator | Thursday 27 March 2025 00:51:13 +0000 (0:00:03.591) 0:00:36.840 ******** 2025-03-27 00:52:05.854509 | orchestrator | 2025-03-27 00:52:05.854522 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-27 00:52:05.854534 | orchestrator | Thursday 27 March 2025 00:51:13 +0000 (0:00:00.352) 0:00:37.192 ******** 2025-03-27 00:52:05.854546 | orchestrator | 2025-03-27 00:52:05.854559 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-27 00:52:05.854571 | orchestrator | Thursday 27 March 2025 00:51:14 +0000 (0:00:00.480) 0:00:37.672 ******** 2025-03-27 00:52:05.854583 | orchestrator | 2025-03-27 00:52:05.854596 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-27 00:52:05.854608 | orchestrator | Thursday 27 March 2025 00:51:14 +0000 (0:00:00.170) 0:00:37.843 ******** 2025-03-27 00:52:05.854621 | orchestrator | 2025-03-27 00:52:05.854637 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-27 00:52:05.854650 | orchestrator | Thursday 27 March 2025 00:51:15 +0000 (0:00:00.398) 0:00:38.241 ******** 2025-03-27 00:52:05.854662 | orchestrator | 2025-03-27 00:52:05.854675 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-27 00:52:05.854687 | orchestrator | Thursday 27 March 2025 00:51:15 +0000 (0:00:00.141) 0:00:38.382 ******** 2025-03-27 00:52:05.854699 | orchestrator | 2025-03-27 00:52:05.854712 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-03-27 00:52:05.854724 | orchestrator | Thursday 27 March 2025 00:51:15 +0000 (0:00:00.397) 0:00:38.780 ******** 2025-03-27 00:52:05.854736 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:52:05.854749 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:52:05.854761 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:52:05.854774 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:52:05.854786 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:52:05.854798 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:52:05.854810 | orchestrator | 2025-03-27 00:52:05.854823 | orchestrator | RUNNING 
HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-03-27 00:52:05.854836 | orchestrator | Thursday 27 March 2025 00:51:26 +0000 (0:00:11.197) 0:00:49.977 ******** 2025-03-27 00:52:05.854853 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:52:05.854866 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:52:05.854879 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:52:05.854891 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:52:05.854903 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:52:05.854916 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:52:05.854928 | orchestrator | 2025-03-27 00:52:05.854940 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-03-27 00:52:05.854953 | orchestrator | Thursday 27 March 2025 00:51:29 +0000 (0:00:02.425) 0:00:52.402 ******** 2025-03-27 00:52:05.854966 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:52:05.854979 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:52:05.854998 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:52:05.855012 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:52:05.855025 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:52:05.855037 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:52:05.855050 | orchestrator | 2025-03-27 00:52:05.855062 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-03-27 00:52:05.855080 | orchestrator | Thursday 27 March 2025 00:51:39 +0000 (0:00:10.048) 0:01:02.451 ******** 2025-03-27 00:52:05.855093 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-03-27 00:52:05.855106 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-03-27 00:52:05.855119 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-03-27 00:52:05.855131 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-03-27 00:52:05.855144 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-03-27 00:52:05.855156 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-03-27 00:52:05.855168 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-03-27 00:52:05.855181 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-03-27 00:52:05.855197 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-03-27 00:52:05.855210 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-03-27 00:52:05.855222 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-03-27 00:52:05.855234 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-03-27 00:52:05.855247 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 
2025-03-27 00:52:05.855259 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-27 00:52:05.855271 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-27 00:52:05.855298 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-27 00:52:05.855311 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-27 00:52:05.855323 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-27 00:52:05.855336 | orchestrator | 2025-03-27 00:52:05.855348 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-03-27 00:52:05.855360 | orchestrator | Thursday 27 March 2025 00:51:47 +0000 (0:00:08.484) 0:01:10.935 ******** 2025-03-27 00:52:05.855373 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-03-27 00:52:05.855385 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:52:05.855398 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-03-27 00:52:05.855410 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:52:05.855423 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-03-27 00:52:05.855435 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:52:05.855447 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-03-27 00:52:05.855460 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-03-27 00:52:05.855472 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-03-27 00:52:05.855484 | orchestrator | 2025-03-27 00:52:05.855497 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-03-27 00:52:05.855509 | orchestrator | Thursday 27 March 2025 00:51:50 +0000 (0:00:02.465) 0:01:13.401 ******** 2025-03-27 00:52:05.855522 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-03-27 00:52:05.855543 | orchestrator | skipping: [testbed-node-3] 2025-03-27 00:52:05.855556 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-03-27 00:52:05.855568 | orchestrator | skipping: [testbed-node-4] 2025-03-27 00:52:05.855580 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-03-27 00:52:05.855593 | orchestrator | skipping: [testbed-node-5] 2025-03-27 00:52:05.855605 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-03-27 00:52:05.855624 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-03-27 00:52:08.891144 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-03-27 00:52:08.891265 | orchestrator | 2025-03-27 00:52:08.891330 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-03-27 00:52:08.891348 | orchestrator | Thursday 27 March 2025 00:51:53 +0000 (0:00:03.722) 0:01:17.123 ******** 2025-03-27 00:52:08.891363 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:52:08.891378 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:52:08.891392 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:52:08.891406 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:52:08.891420 | orchestrator | changed: [testbed-node-4] 2025-03-27 
00:52:08.891434 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:52:08.891447 | orchestrator | 2025-03-27 00:52:08.891461 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:52:08.891477 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:52:08.891492 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:52:08.891547 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 00:52:08.891563 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 00:52:08.891577 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 00:52:08.891608 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 00:52:08.891622 | orchestrator | 2025-03-27 00:52:08.891636 | orchestrator | 2025-03-27 00:52:08.891651 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 00:52:08.891665 | orchestrator | Thursday 27 March 2025 00:52:02 +0000 (0:00:08.930) 0:01:26.054 ******** 2025-03-27 00:52:08.891679 | orchestrator | =============================================================================== 2025-03-27 00:52:08.891694 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.98s 2025-03-27 00:52:08.891708 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.20s 2025-03-27 00:52:08.891724 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.48s 2025-03-27 00:52:08.891740 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 5.60s 2025-03-27 00:52:08.891757 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.73s 2025-03-27 00:52:08.891774 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.72s 2025-03-27 00:52:08.891790 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.59s 2025-03-27 00:52:08.891806 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.40s 2025-03-27 00:52:08.891821 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.02s 2025-03-27 00:52:08.891837 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.01s 2025-03-27 00:52:08.891881 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.76s 2025-03-27 00:52:08.891897 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.66s 2025-03-27 00:52:08.891913 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.51s 2025-03-27 00:52:08.891928 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.47s 2025-03-27 00:52:08.891944 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.43s 2025-03-27 00:52:08.891960 | orchestrator | module-load : Load modules ---------------------------------------------- 2.10s 2025-03-27 00:52:08.891976 | orchestrator | module-load : Drop module 
persistence ----------------------------------- 2.05s 2025-03-27 00:52:08.891992 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.94s 2025-03-27 00:52:08.892008 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.01s 2025-03-27 00:52:08.892023 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.98s 2025-03-27 00:52:08.892040 | orchestrator | 2025-03-27 00:52:05 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:08.892056 | orchestrator | 2025-03-27 00:52:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:08.892073 | orchestrator | 2025-03-27 00:52:05 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:08.892104 | orchestrator | 2025-03-27 00:52:08 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:08.892242 | orchestrator | 2025-03-27 00:52:08 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:08.892268 | orchestrator | 2025-03-27 00:52:08 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:08.892961 | orchestrator | 2025-03-27 00:52:08 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:08.892995 | orchestrator | 2025-03-27 00:52:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:11.939681 | orchestrator | 2025-03-27 00:52:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:11.939817 | orchestrator | 2025-03-27 00:52:11 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:11.947420 | orchestrator | 2025-03-27 00:52:11 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:11.948190 | orchestrator | 2025-03-27 00:52:11 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:11.948224 | orchestrator | 2025-03-27 00:52:11 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:11.950314 | orchestrator | 2025-03-27 00:52:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:14.989050 | orchestrator | 2025-03-27 00:52:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:14.989185 | orchestrator | 2025-03-27 00:52:14 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:14.991258 | orchestrator | 2025-03-27 00:52:14 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:14.991806 | orchestrator | 2025-03-27 00:52:14 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:14.995482 | orchestrator | 2025-03-27 00:52:14 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:14.996436 | orchestrator | 2025-03-27 00:52:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:18.057102 | orchestrator | 2025-03-27 00:52:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:18.057255 | orchestrator | 2025-03-27 00:52:18 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:18.057751 | orchestrator | 2025-03-27 00:52:18 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:18.057788 | orchestrator | 2025-03-27 00:52:18 | INFO  | Task 
a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:18.062703 | orchestrator | 2025-03-27 00:52:18 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:18.063174 | orchestrator | 2025-03-27 00:52:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:18.063536 | orchestrator | 2025-03-27 00:52:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:21.111220 | orchestrator | 2025-03-27 00:52:21 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:21.113470 | orchestrator | 2025-03-27 00:52:21 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:21.113511 | orchestrator | 2025-03-27 00:52:21 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:21.113964 | orchestrator | 2025-03-27 00:52:21 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:21.117262 | orchestrator | 2025-03-27 00:52:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:24.160967 | orchestrator | 2025-03-27 00:52:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:24.161094 | orchestrator | 2025-03-27 00:52:24 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:24.161486 | orchestrator | 2025-03-27 00:52:24 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:24.162358 | orchestrator | 2025-03-27 00:52:24 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:24.163507 | orchestrator | 2025-03-27 00:52:24 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:24.164691 | orchestrator | 2025-03-27 00:52:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:27.208985 | orchestrator | 2025-03-27 00:52:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:27.209122 | orchestrator | 2025-03-27 00:52:27 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:27.209561 | orchestrator | 2025-03-27 00:52:27 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:27.210095 | orchestrator | 2025-03-27 00:52:27 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:27.210680 | orchestrator | 2025-03-27 00:52:27 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:27.212528 | orchestrator | 2025-03-27 00:52:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:30.261078 | orchestrator | 2025-03-27 00:52:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:30.261221 | orchestrator | 2025-03-27 00:52:30 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:30.262183 | orchestrator | 2025-03-27 00:52:30 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:30.263649 | orchestrator | 2025-03-27 00:52:30 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:30.264200 | orchestrator | 2025-03-27 00:52:30 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:30.265326 | orchestrator | 2025-03-27 00:52:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:30.266923 | orchestrator | 2025-03-27 
00:52:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:33.310476 | orchestrator | 2025-03-27 00:52:33 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:33.311573 | orchestrator | 2025-03-27 00:52:33 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:33.312867 | orchestrator | 2025-03-27 00:52:33 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:33.314068 | orchestrator | 2025-03-27 00:52:33 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:33.315092 | orchestrator | 2025-03-27 00:52:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:36.363645 | orchestrator | 2025-03-27 00:52:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:36.363789 | orchestrator | 2025-03-27 00:52:36 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:36.365124 | orchestrator | 2025-03-27 00:52:36 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:36.367249 | orchestrator | 2025-03-27 00:52:36 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:36.368669 | orchestrator | 2025-03-27 00:52:36 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:36.371088 | orchestrator | 2025-03-27 00:52:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:36.371250 | orchestrator | 2025-03-27 00:52:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:39.412950 | orchestrator | 2025-03-27 00:52:39 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:39.414095 | orchestrator | 2025-03-27 00:52:39 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:39.414141 | orchestrator | 2025-03-27 00:52:39 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:39.414515 | orchestrator | 2025-03-27 00:52:39 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:39.415425 | orchestrator | 2025-03-27 00:52:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:42.459113 | orchestrator | 2025-03-27 00:52:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:42.459264 | orchestrator | 2025-03-27 00:52:42 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:42.462978 | orchestrator | 2025-03-27 00:52:42 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:42.466291 | orchestrator | 2025-03-27 00:52:42 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:42.469965 | orchestrator | 2025-03-27 00:52:42 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:42.471654 | orchestrator | 2025-03-27 00:52:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:45.529954 | orchestrator | 2025-03-27 00:52:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:45.530133 | orchestrator | 2025-03-27 00:52:45 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:45.533647 | orchestrator | 2025-03-27 00:52:45 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:45.539055 | orchestrator | 2025-03-27 
00:52:45 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:45.543183 | orchestrator | 2025-03-27 00:52:45 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:45.544030 | orchestrator | 2025-03-27 00:52:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:48.602349 | orchestrator | 2025-03-27 00:52:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:48.602604 | orchestrator | 2025-03-27 00:52:48 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:48.602711 | orchestrator | 2025-03-27 00:52:48 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:48.603730 | orchestrator | 2025-03-27 00:52:48 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:48.604707 | orchestrator | 2025-03-27 00:52:48 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:48.606141 | orchestrator | 2025-03-27 00:52:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:51.644509 | orchestrator | 2025-03-27 00:52:48 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:51.644651 | orchestrator | 2025-03-27 00:52:51 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:51.645526 | orchestrator | 2025-03-27 00:52:51 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:51.647323 | orchestrator | 2025-03-27 00:52:51 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:51.648675 | orchestrator | 2025-03-27 00:52:51 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:51.650435 | orchestrator | 2025-03-27 00:52:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:51.650565 | orchestrator | 2025-03-27 00:52:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:54.703526 | orchestrator | 2025-03-27 00:52:54 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:54.705528 | orchestrator | 2025-03-27 00:52:54 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:54.706161 | orchestrator | 2025-03-27 00:52:54 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:54.708758 | orchestrator | 2025-03-27 00:52:54 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:54.710179 | orchestrator | 2025-03-27 00:52:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:52:57.764274 | orchestrator | 2025-03-27 00:52:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:52:57.764478 | orchestrator | 2025-03-27 00:52:57 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:52:57.765965 | orchestrator | 2025-03-27 00:52:57 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:52:57.768093 | orchestrator | 2025-03-27 00:52:57 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:52:57.770120 | orchestrator | 2025-03-27 00:52:57 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:52:57.771507 | orchestrator | 2025-03-27 00:52:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:00.820581 | 
orchestrator | 2025-03-27 00:52:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:00.820716 | orchestrator | 2025-03-27 00:53:00 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:53:00.822157 | orchestrator | 2025-03-27 00:53:00 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:00.823176 | orchestrator | 2025-03-27 00:53:00 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:00.830979 | orchestrator | 2025-03-27 00:53:00 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:00.833139 | orchestrator | 2025-03-27 00:53:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:00.833465 | orchestrator | 2025-03-27 00:53:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:03.882281 | orchestrator | 2025-03-27 00:53:03 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:53:03.883157 | orchestrator | 2025-03-27 00:53:03 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:03.884845 | orchestrator | 2025-03-27 00:53:03 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:03.887059 | orchestrator | 2025-03-27 00:53:03 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:03.890330 | orchestrator | 2025-03-27 00:53:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:06.934147 | orchestrator | 2025-03-27 00:53:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:06.934267 | orchestrator | 2025-03-27 00:53:06 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:53:06.937227 | orchestrator | 2025-03-27 00:53:06 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:06.939038 | orchestrator | 2025-03-27 00:53:06 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:06.941497 | orchestrator | 2025-03-27 00:53:06 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:06.943164 | orchestrator | 2025-03-27 00:53:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:06.943518 | orchestrator | 2025-03-27 00:53:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:09.988776 | orchestrator | 2025-03-27 00:53:09 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:53:09.988985 | orchestrator | 2025-03-27 00:53:09 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:09.989567 | orchestrator | 2025-03-27 00:53:09 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:09.990208 | orchestrator | 2025-03-27 00:53:09 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:09.991030 | orchestrator | 2025-03-27 00:53:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:13.032220 | orchestrator | 2025-03-27 00:53:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:13.032410 | orchestrator | 2025-03-27 00:53:13 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:53:13.034065 | orchestrator | 2025-03-27 00:53:13 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:13.036628 | 
orchestrator | 2025-03-27 00:53:13 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:13.037135 | orchestrator | 2025-03-27 00:53:13 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:13.038211 | orchestrator | 2025-03-27 00:53:13 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:13.038775 | orchestrator | 2025-03-27 00:53:13 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:16.084822 | orchestrator | 2025-03-27 00:53:16 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:53:16.087077 | orchestrator | 2025-03-27 00:53:16 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:16.089009 | orchestrator | 2025-03-27 00:53:16 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:16.089022 | orchestrator | 2025-03-27 00:53:16 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:16.089031 | orchestrator | 2025-03-27 00:53:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:19.120540 | orchestrator | 2025-03-27 00:53:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:19.120695 | orchestrator | 2025-03-27 00:53:19 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:53:19.120823 | orchestrator | 2025-03-27 00:53:19 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:19.121443 | orchestrator | 2025-03-27 00:53:19 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:19.122141 | orchestrator | 2025-03-27 00:53:19 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:19.122649 | orchestrator | 2025-03-27 00:53:19 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:19.122741 | orchestrator | 2025-03-27 00:53:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:22.152113 | orchestrator | 2025-03-27 00:53:22 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:53:22.154219 | orchestrator | 2025-03-27 00:53:22 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:22.155461 | orchestrator | 2025-03-27 00:53:22 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:22.155488 | orchestrator | 2025-03-27 00:53:22 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:22.156247 | orchestrator | 2025-03-27 00:53:22 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:25.199389 | orchestrator | 2025-03-27 00:53:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:25.199516 | orchestrator | 2025-03-27 00:53:25 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:53:25.199958 | orchestrator | 2025-03-27 00:53:25 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:25.200761 | orchestrator | 2025-03-27 00:53:25 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:25.201270 | orchestrator | 2025-03-27 00:53:25 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:25.202306 | orchestrator | 2025-03-27 00:53:25 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 
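The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines are the task watcher polling the IDs of the queued deploy tasks once per second until they reach a terminal state (the first SUCCESS transition appears further below). A minimal sketch of that polling pattern, with a hypothetical get_task_state() helper standing in for the real client call:

# Minimal sketch of the polling loop visible in the log. get_task_state() is a
# hypothetical helper, not the actual osism client API; the states mirror the
# STARTED/SUCCESS values printed in the console output.
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)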
2025-03-27 00:53:28.255512 | orchestrator | 2025-03-27 00:53:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:28.255641 | orchestrator | 2025-03-27 00:53:28 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state STARTED 2025-03-27 00:53:28.257224 | orchestrator | 2025-03-27 00:53:28 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:28.259601 | orchestrator | 2025-03-27 00:53:28 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:28.267708 | orchestrator | 2025-03-27 00:53:28 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:28.268635 | orchestrator | 2025-03-27 00:53:28 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:31.305742 | orchestrator | 2025-03-27 00:53:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:31.305889 | orchestrator | 2025-03-27 00:53:31.305910 | orchestrator | 2025-03-27 00:53:31.305926 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-03-27 00:53:31.305942 | orchestrator | 2025-03-27 00:53:31.305956 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-03-27 00:53:31.305971 | orchestrator | Thursday 27 March 2025 00:51:01 +0000 (0:00:00.271) 0:00:00.271 ******** 2025-03-27 00:53:31.305985 | orchestrator | ok: [localhost] => { 2025-03-27 00:53:31.306002 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-03-27 00:53:31.306063 | orchestrator | } 2025-03-27 00:53:31.306079 | orchestrator | 2025-03-27 00:53:31.306094 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-03-27 00:53:31.306108 | orchestrator | Thursday 27 March 2025 00:51:01 +0000 (0:00:00.090) 0:00:00.362 ******** 2025-03-27 00:53:31.306123 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-03-27 00:53:31.306139 | orchestrator | ...ignoring 2025-03-27 00:53:31.306153 | orchestrator | 2025-03-27 00:53:31.306166 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-03-27 00:53:31.306180 | orchestrator | Thursday 27 March 2025 00:51:04 +0000 (0:00:03.156) 0:00:03.520 ******** 2025-03-27 00:53:31.306194 | orchestrator | skipping: [localhost] 2025-03-27 00:53:31.306208 | orchestrator | 2025-03-27 00:53:31.306222 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-03-27 00:53:31.306235 | orchestrator | Thursday 27 March 2025 00:51:04 +0000 (0:00:00.145) 0:00:03.666 ******** 2025-03-27 00:53:31.306249 | orchestrator | ok: [localhost] 2025-03-27 00:53:31.306263 | orchestrator | 2025-03-27 00:53:31.306279 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 00:53:31.306294 | orchestrator | 2025-03-27 00:53:31.306310 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 00:53:31.306346 | orchestrator | Thursday 27 March 2025 00:51:05 +0000 (0:00:00.431) 0:00:04.097 ******** 2025-03-27 00:53:31.306363 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:53:31.306378 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:53:31.306393 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:53:31.306409 | orchestrator | 2025-03-27 00:53:31.306425 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 00:53:31.306440 | orchestrator | Thursday 27 March 2025 00:51:06 +0000 (0:00:01.130) 0:00:05.228 ******** 2025-03-27 00:53:31.306456 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-03-27 00:53:31.306471 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-03-27 00:53:31.306487 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-03-27 00:53:31.306502 | orchestrator | 2025-03-27 00:53:31.306517 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-03-27 00:53:31.306533 | orchestrator | 2025-03-27 00:53:31.306548 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-27 00:53:31.306564 | orchestrator | Thursday 27 March 2025 00:51:07 +0000 (0:00:01.277) 0:00:06.505 ******** 2025-03-27 00:53:31.306579 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:53:31.306595 | orchestrator | 2025-03-27 00:53:31.306610 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-03-27 00:53:31.306626 | orchestrator | Thursday 27 March 2025 00:51:09 +0000 (0:00:02.188) 0:00:08.694 ******** 2025-03-27 00:53:31.306664 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:53:31.306679 | orchestrator | 2025-03-27 00:53:31.306693 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-03-27 00:53:31.306706 | orchestrator | Thursday 27 March 2025 00:51:11 +0000 (0:00:02.088) 0:00:10.783 ******** 2025-03-27 00:53:31.306720 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:53:31.306735 | orchestrator | 2025-03-27 00:53:31.306841 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-03-27 00:53:31.306863 | orchestrator | Thursday 27 March 2025 00:51:12 +0000 (0:00:00.376) 0:00:11.159 ******** 2025-03-27 00:53:31.306877 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:53:31.306891 | orchestrator | 2025-03-27 00:53:31.306905 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-03-27 00:53:31.306919 | orchestrator | Thursday 27 March 2025 00:51:13 +0000 (0:00:00.771) 0:00:11.931 ******** 2025-03-27 00:53:31.306932 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:53:31.306946 | orchestrator | 2025-03-27 00:53:31.306960 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-03-27 00:53:31.306973 | orchestrator | Thursday 27 March 2025 00:51:13 +0000 (0:00:00.737) 0:00:12.668 ******** 2025-03-27 00:53:31.306987 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:53:31.307001 | orchestrator | 2025-03-27 00:53:31.307015 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-27 00:53:31.307028 | orchestrator | Thursday 27 March 2025 00:51:14 +0000 (0:00:00.818) 0:00:13.486 ******** 2025-03-27 00:53:31.307042 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:53:31.307056 | orchestrator | 2025-03-27 00:53:31.307070 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-03-27 00:53:31.307083 | orchestrator | Thursday 27 March 2025 00:51:15 +0000 (0:00:01.311) 0:00:14.798 ******** 2025-03-27 00:53:31.307097 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:53:31.307111 | orchestrator | 2025-03-27 00:53:31.307125 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-03-27 00:53:31.307138 | orchestrator | Thursday 27 March 2025 00:51:17 +0000 (0:00:01.198) 0:00:15.997 ******** 2025-03-27 00:53:31.307152 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:53:31.307166 | orchestrator | 2025-03-27 00:53:31.307179 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-03-27 00:53:31.307193 | orchestrator | Thursday 27 March 2025 00:51:17 +0000 (0:00:00.661) 0:00:16.658 ******** 2025-03-27 00:53:31.307207 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:53:31.307221 | orchestrator | 2025-03-27 00:53:31.307244 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-03-27 00:53:31.307258 | orchestrator | Thursday 27 March 2025 00:51:18 +0000 (0:00:00.707) 0:00:17.365 ******** 2025-03-27 00:53:31.307275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:53:31.307294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:53:31.307353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:53:31.307371 | orchestrator | 2025-03-27 00:53:31.307385 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-03-27 00:53:31.307399 | orchestrator | Thursday 27 March 2025 00:51:20 +0000 (0:00:01.829) 0:00:19.195 ******** 2025-03-27 00:53:31.307425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:53:31.307441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:53:31.307463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:53:31.307478 | orchestrator | 2025-03-27 00:53:31.307492 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-03-27 00:53:31.307506 | orchestrator | Thursday 27 March 2025 00:51:22 +0000 (0:00:02.024) 0:00:21.220 ******** 2025-03-27 00:53:31.307520 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-27 00:53:31.307544 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-27 00:53:31.307559 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-27 00:53:31.307573 | orchestrator | 2025-03-27 00:53:31.307587 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-03-27 00:53:31.307601 | orchestrator | Thursday 27 March 2025 00:51:24 +0000 (0:00:02.052) 0:00:23.272 ******** 
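For readability, the rabbitmq item dict that appears in each "changed:" result of the preceding tasks (and again further below) can be condensed into a plain Python literal; the values are copied from the log output, with the cluster cookie replaced by a placeholder:

# Condensed view of the rabbitmq service definition printed in the task items
# above. RABBITMQ_CLUSTER_COOKIE is replaced with a placeholder here;
# bootstrap_environment (same values plus KOLLA_BOOTSTRAP: None) is omitted.
rabbitmq_service = {
    "container_name": "rabbitmq",
    "group": "rabbitmq",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206",
    "environment": {
        "KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS",
        "RABBITMQ_CLUSTER_COOKIE": "<cluster-cookie>",
        "RABBITMQ_LOG_DIR": "/var/log/kolla/rabbitmq",
    },
    "volumes": [
        "/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "rabbitmq:/var/lib/rabbitmq/",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_rabbitmq"],
        "timeout": "30",
    },
    "haproxy": {
        "rabbitmq_management": {
            "enabled": "yes",
            "mode": "http",
            "port": "15672",
            "host_group": "rabbitmq",
        }
    },
}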
2025-03-27 00:53:31.307615 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-27 00:53:31.307629 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-27 00:53:31.307643 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-27 00:53:31.307657 | orchestrator | 2025-03-27 00:53:31.307670 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-03-27 00:53:31.307684 | orchestrator | Thursday 27 March 2025 00:51:27 +0000 (0:00:03.049) 0:00:26.322 ******** 2025-03-27 00:53:31.307698 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-27 00:53:31.307712 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-27 00:53:31.307725 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-27 00:53:31.307739 | orchestrator | 2025-03-27 00:53:31.307759 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-03-27 00:53:31.307773 | orchestrator | Thursday 27 March 2025 00:51:30 +0000 (0:00:02.731) 0:00:29.053 ******** 2025-03-27 00:53:31.307787 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-27 00:53:31.307801 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-27 00:53:31.307815 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-27 00:53:31.307836 | orchestrator | 2025-03-27 00:53:31.307850 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-03-27 00:53:31.307863 | orchestrator | Thursday 27 March 2025 00:51:33 +0000 (0:00:03.452) 0:00:32.506 ******** 2025-03-27 00:53:31.307877 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-27 00:53:31.307891 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-27 00:53:31.307905 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-27 00:53:31.307919 | orchestrator | 2025-03-27 00:53:31.307933 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-03-27 00:53:31.307951 | orchestrator | Thursday 27 March 2025 00:51:35 +0000 (0:00:01.970) 0:00:34.477 ******** 2025-03-27 00:53:31.307965 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-27 00:53:31.307979 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-27 00:53:31.307993 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-27 00:53:31.308007 | orchestrator | 2025-03-27 00:53:31.308021 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-27 00:53:31.308034 | orchestrator | Thursday 27 March 2025 00:51:37 +0000 (0:00:02.290) 0:00:36.767 ******** 2025-03-27 00:53:31.308048 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:53:31.308062 | orchestrator | skipping: [testbed-node-1] 
2025-03-27 00:53:31.308076 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:53:31.308090 | orchestrator | 2025-03-27 00:53:31.308104 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-03-27 00:53:31.308117 | orchestrator | Thursday 27 March 2025 00:51:38 +0000 (0:00:00.919) 0:00:37.687 ******** 2025-03-27 00:53:31.308132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:53:31.308147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:53:31.308177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:53:31.308192 | orchestrator | 2025-03-27 00:53:31.308206 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-03-27 00:53:31.308220 | orchestrator | Thursday 27 March 2025 00:51:41 +0000 (0:00:02.544) 0:00:40.232 ******** 2025-03-27 00:53:31.308233 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:53:31.308247 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:53:31.308261 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:53:31.308275 | orchestrator | 2025-03-27 00:53:31.308289 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-03-27 00:53:31.308302 | orchestrator | Thursday 27 March 2025 00:51:42 +0000 (0:00:01.101) 0:00:41.334 ******** 2025-03-27 00:53:31.308365 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:53:31.308382 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:53:31.308396 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:53:31.308410 | orchestrator | 2025-03-27 00:53:31.308424 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-03-27 00:53:31.308438 | orchestrator | Thursday 27 March 2025 00:51:48 +0000 (0:00:06.062) 0:00:47.396 ******** 2025-03-27 00:53:31.308452 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:53:31.308466 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:53:31.308479 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:53:31.308493 | orchestrator | 2025-03-27 00:53:31.308507 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-27 00:53:31.308521 | orchestrator | 2025-03-27 00:53:31.308535 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-27 00:53:31.308549 | orchestrator | Thursday 27 March 2025 00:51:48 +0000 (0:00:00.402) 0:00:47.798 ******** 2025-03-27 00:53:31.308562 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:53:31.308576 | orchestrator | 2025-03-27 00:53:31.308589 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-27 00:53:31.308601 | orchestrator | Thursday 27 March 2025 00:51:49 +0000 (0:00:00.762) 0:00:48.561 ******** 2025-03-27 00:53:31.308613 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:53:31.308626 | orchestrator | 2025-03-27 00:53:31.308638 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-27 00:53:31.308650 | orchestrator | Thursday 27 March 2025 00:51:50 +0000 (0:00:00.284) 0:00:48.846 ******** 2025-03-27 00:53:31.308663 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:53:31.308675 | orchestrator | 2025-03-27 00:53:31.308687 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-27 00:53:31.308699 | orchestrator | Thursday 27 March 2025 00:51:51 +0000 (0:00:01.806) 0:00:50.652 ******** 2025-03-27 00:53:31.308712 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:53:31.308724 | orchestrator | 2025-03-27 00:53:31.308737 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-27 00:53:31.308758 | orchestrator | 2025-03-27 00:53:31.308770 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-27 
00:53:31.308782 | orchestrator | Thursday 27 March 2025 00:52:47 +0000 (0:00:56.088) 0:01:46.740 ******** 2025-03-27 00:53:31.308795 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:53:31.308807 | orchestrator | 2025-03-27 00:53:31.308819 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-27 00:53:31.308832 | orchestrator | Thursday 27 March 2025 00:52:48 +0000 (0:00:00.722) 0:01:47.463 ******** 2025-03-27 00:53:31.308844 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:53:31.308857 | orchestrator | 2025-03-27 00:53:31.308869 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-27 00:53:31.308881 | orchestrator | Thursday 27 March 2025 00:52:48 +0000 (0:00:00.284) 0:01:47.748 ******** 2025-03-27 00:53:31.308893 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:53:31.308905 | orchestrator | 2025-03-27 00:53:31.308918 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-27 00:53:31.308930 | orchestrator | Thursday 27 March 2025 00:52:51 +0000 (0:00:02.167) 0:01:49.915 ******** 2025-03-27 00:53:31.308942 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:53:31.308954 | orchestrator | 2025-03-27 00:53:31.308966 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-27 00:53:31.308979 | orchestrator | 2025-03-27 00:53:31.308991 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-27 00:53:31.309003 | orchestrator | Thursday 27 March 2025 00:53:05 +0000 (0:00:14.715) 0:02:04.631 ******** 2025-03-27 00:53:31.309015 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:53:31.309034 | orchestrator | 2025-03-27 00:53:31.309050 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-27 00:53:31.309063 | orchestrator | Thursday 27 March 2025 00:53:06 +0000 (0:00:00.607) 0:02:05.239 ******** 2025-03-27 00:53:31.309075 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:53:31.309088 | orchestrator | 2025-03-27 00:53:31.309100 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-27 00:53:31.309119 | orchestrator | Thursday 27 March 2025 00:53:06 +0000 (0:00:00.260) 0:02:05.499 ******** 2025-03-27 00:53:31.309132 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:53:31.309144 | orchestrator | 2025-03-27 00:53:31.309156 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-27 00:53:31.309169 | orchestrator | Thursday 27 March 2025 00:53:13 +0000 (0:00:06.983) 0:02:12.484 ******** 2025-03-27 00:53:31.309181 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:53:31.309193 | orchestrator | 2025-03-27 00:53:31.309205 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-03-27 00:53:31.309218 | orchestrator | 2025-03-27 00:53:31.309230 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-03-27 00:53:31.309242 | orchestrator | Thursday 27 March 2025 00:53:25 +0000 (0:00:11.748) 0:02:24.233 ******** 2025-03-27 00:53:31.309255 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:53:31.309267 | orchestrator | 2025-03-27 00:53:31.309279 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] 
****************************** 2025-03-27 00:53:31.309291 | orchestrator | Thursday 27 March 2025 00:53:26 +0000 (0:00:01.228) 0:02:25.462 ******** 2025-03-27 00:53:31.309303 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-03-27 00:53:31.309329 | orchestrator | enable_outward_rabbitmq_True 2025-03-27 00:53:31.309343 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-03-27 00:53:31.309355 | orchestrator | outward_rabbitmq_restart 2025-03-27 00:53:31.309368 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:53:31.309380 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:53:31.309393 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:53:31.309405 | orchestrator | 2025-03-27 00:53:31.309417 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-03-27 00:53:31.309429 | orchestrator | skipping: no hosts matched 2025-03-27 00:53:31.309448 | orchestrator | 2025-03-27 00:53:31.309460 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-03-27 00:53:31.309472 | orchestrator | skipping: no hosts matched 2025-03-27 00:53:31.309485 | orchestrator | 2025-03-27 00:53:31.309497 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-03-27 00:53:31.309509 | orchestrator | skipping: no hosts matched 2025-03-27 00:53:31.309522 | orchestrator | 2025-03-27 00:53:31.309534 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:53:31.309547 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-03-27 00:53:31.309560 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-03-27 00:53:31.309572 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:53:31.309585 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 00:53:31.309598 | orchestrator | 2025-03-27 00:53:31.309610 | orchestrator | 2025-03-27 00:53:31.309622 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 00:53:31.309634 | orchestrator | Thursday 27 March 2025 00:53:29 +0000 (0:00:03.114) 0:02:28.576 ******** 2025-03-27 00:53:31.309647 | orchestrator | =============================================================================== 2025-03-27 00:53:31.309708 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 82.55s 2025-03-27 00:53:31.309721 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.96s 2025-03-27 00:53:31.309734 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.06s 2025-03-27 00:53:31.309746 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.45s 2025-03-27 00:53:31.309758 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.16s 2025-03-27 00:53:31.309770 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.11s 2025-03-27 00:53:31.309783 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.05s 2025-03-27 00:53:31.309795 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 
2.73s 2025-03-27 00:53:31.309807 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.55s 2025-03-27 00:53:31.309819 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.29s 2025-03-27 00:53:31.309832 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.19s 2025-03-27 00:53:31.309844 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.09s 2025-03-27 00:53:31.309856 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.09s 2025-03-27 00:53:31.309873 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.05s 2025-03-27 00:53:31.309886 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.02s 2025-03-27 00:53:31.309898 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.97s 2025-03-27 00:53:31.309911 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.83s 2025-03-27 00:53:31.309923 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.32s 2025-03-27 00:53:31.309935 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.28s 2025-03-27 00:53:31.309947 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.23s 2025-03-27 00:53:31.309966 | orchestrator | 2025-03-27 00:53:31 | INFO  | Task c0c9af98-96a7-4315-ac8a-9b86aed2b933 is in state SUCCESS 2025-03-27 00:53:31.310102 | orchestrator | 2025-03-27 00:53:31 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:31.312428 | orchestrator | 2025-03-27 00:53:31 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:31.312807 | orchestrator | 2025-03-27 00:53:31 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:31.314812 | orchestrator | 2025-03-27 00:53:31 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:31.315194 | orchestrator | 2025-03-27 00:53:31 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:34.381894 | orchestrator | 2025-03-27 00:53:34 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:34.384613 | orchestrator | 2025-03-27 00:53:34 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:34.385810 | orchestrator | 2025-03-27 00:53:34 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:34.387238 | orchestrator | 2025-03-27 00:53:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:34.387585 | orchestrator | 2025-03-27 00:53:34 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:37.431918 | orchestrator | 2025-03-27 00:53:37 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:37.434201 | orchestrator | 2025-03-27 00:53:37 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:37.436477 | orchestrator | 2025-03-27 00:53:37 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:37.440141 | orchestrator | 2025-03-27 00:53:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:37.440628 | orchestrator | 2025-03-27 00:53:37 | 
INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:40.484778 | orchestrator | 2025-03-27 00:53:40 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:40.485620 | orchestrator | 2025-03-27 00:53:40 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:40.485669 | orchestrator | 2025-03-27 00:53:40 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:40.486522 | orchestrator | 2025-03-27 00:53:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:40.486933 | orchestrator | 2025-03-27 00:53:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:43.537806 | orchestrator | 2025-03-27 00:53:43 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:43.539923 | orchestrator | 2025-03-27 00:53:43 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:43.542005 | orchestrator | 2025-03-27 00:53:43 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:43.544282 | orchestrator | 2025-03-27 00:53:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:43.545434 | orchestrator | 2025-03-27 00:53:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:46.592780 | orchestrator | 2025-03-27 00:53:46 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:46.594223 | orchestrator | 2025-03-27 00:53:46 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:46.595405 | orchestrator | 2025-03-27 00:53:46 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:46.596980 | orchestrator | 2025-03-27 00:53:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:49.646412 | orchestrator | 2025-03-27 00:53:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:49.647128 | orchestrator | 2025-03-27 00:53:49 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:49.647985 | orchestrator | 2025-03-27 00:53:49 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:49.650283 | orchestrator | 2025-03-27 00:53:49 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:52.692843 | orchestrator | 2025-03-27 00:53:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:52.692965 | orchestrator | 2025-03-27 00:53:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:52.693002 | orchestrator | 2025-03-27 00:53:52 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:52.694547 | orchestrator | 2025-03-27 00:53:52 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:52.698896 | orchestrator | 2025-03-27 00:53:52 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:52.702723 | orchestrator | 2025-03-27 00:53:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:55.746621 | orchestrator | 2025-03-27 00:53:52 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:55.746748 | orchestrator | 2025-03-27 00:53:55 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:55.749677 | orchestrator | 2025-03-27 00:53:55 | INFO  | Task 
a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:55.750836 | orchestrator | 2025-03-27 00:53:55 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:55.763132 | orchestrator | 2025-03-27 00:53:55 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:53:58.814063 | orchestrator | 2025-03-27 00:53:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:53:58.814190 | orchestrator | 2025-03-27 00:53:58 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:53:58.815182 | orchestrator | 2025-03-27 00:53:58 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:53:58.818185 | orchestrator | 2025-03-27 00:53:58 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:53:58.821878 | orchestrator | 2025-03-27 00:53:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:01.871901 | orchestrator | 2025-03-27 00:53:58 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:01.872027 | orchestrator | 2025-03-27 00:54:01 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:04.926676 | orchestrator | 2025-03-27 00:54:01 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:04.926787 | orchestrator | 2025-03-27 00:54:01 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:04.926804 | orchestrator | 2025-03-27 00:54:01 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:04.926820 | orchestrator | 2025-03-27 00:54:01 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:04.926849 | orchestrator | 2025-03-27 00:54:04 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:04.927624 | orchestrator | 2025-03-27 00:54:04 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:04.929210 | orchestrator | 2025-03-27 00:54:04 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:04.929965 | orchestrator | 2025-03-27 00:54:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:04.930230 | orchestrator | 2025-03-27 00:54:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:07.981857 | orchestrator | 2025-03-27 00:54:07 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:07.984437 | orchestrator | 2025-03-27 00:54:07 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:07.986199 | orchestrator | 2025-03-27 00:54:07 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:07.986933 | orchestrator | 2025-03-27 00:54:07 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:11.040058 | orchestrator | 2025-03-27 00:54:07 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:11.040184 | orchestrator | 2025-03-27 00:54:11 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:11.040582 | orchestrator | 2025-03-27 00:54:11 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:11.041500 | orchestrator | 2025-03-27 00:54:11 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:11.042431 | orchestrator | 2025-03-27 00:54:11 | INFO  | Task 
06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:11.042518 | orchestrator | 2025-03-27 00:54:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:14.114215 | orchestrator | 2025-03-27 00:54:14 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:14.116712 | orchestrator | 2025-03-27 00:54:14 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:14.118280 | orchestrator | 2025-03-27 00:54:14 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:14.121383 | orchestrator | 2025-03-27 00:54:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:17.174291 | orchestrator | 2025-03-27 00:54:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:17.174452 | orchestrator | 2025-03-27 00:54:17 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:17.177109 | orchestrator | 2025-03-27 00:54:17 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:17.178005 | orchestrator | 2025-03-27 00:54:17 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:17.180540 | orchestrator | 2025-03-27 00:54:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:20.215181 | orchestrator | 2025-03-27 00:54:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:20.215292 | orchestrator | 2025-03-27 00:54:20 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:20.217543 | orchestrator | 2025-03-27 00:54:20 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:20.217724 | orchestrator | 2025-03-27 00:54:20 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:20.217746 | orchestrator | 2025-03-27 00:54:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:23.280053 | orchestrator | 2025-03-27 00:54:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:23.280180 | orchestrator | 2025-03-27 00:54:23 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:23.280939 | orchestrator | 2025-03-27 00:54:23 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:23.282330 | orchestrator | 2025-03-27 00:54:23 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:23.284020 | orchestrator | 2025-03-27 00:54:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:26.339230 | orchestrator | 2025-03-27 00:54:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:26.339420 | orchestrator | 2025-03-27 00:54:26 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:26.341608 | orchestrator | 2025-03-27 00:54:26 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:26.343567 | orchestrator | 2025-03-27 00:54:26 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:26.345517 | orchestrator | 2025-03-27 00:54:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:26.345638 | orchestrator | 2025-03-27 00:54:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:29.393725 | orchestrator | 2025-03-27 00:54:29 | INFO  | Task 
b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:29.396261 | orchestrator | 2025-03-27 00:54:29 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:29.396304 | orchestrator | 2025-03-27 00:54:29 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:29.398251 | orchestrator | 2025-03-27 00:54:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:32.439286 | orchestrator | 2025-03-27 00:54:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:32.439605 | orchestrator | 2025-03-27 00:54:32 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:32.439707 | orchestrator | 2025-03-27 00:54:32 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:32.441695 | orchestrator | 2025-03-27 00:54:32 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:32.442410 | orchestrator | 2025-03-27 00:54:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:32.442536 | orchestrator | 2025-03-27 00:54:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:35.492966 | orchestrator | 2025-03-27 00:54:35 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:35.493623 | orchestrator | 2025-03-27 00:54:35 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:35.496258 | orchestrator | 2025-03-27 00:54:35 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:35.496966 | orchestrator | 2025-03-27 00:54:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:35.497084 | orchestrator | 2025-03-27 00:54:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:38.542880 | orchestrator | 2025-03-27 00:54:38 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:38.543521 | orchestrator | 2025-03-27 00:54:38 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state STARTED 2025-03-27 00:54:38.544516 | orchestrator | 2025-03-27 00:54:38 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:38.545770 | orchestrator | 2025-03-27 00:54:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:41.605033 | orchestrator | 2025-03-27 00:54:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:41.605170 | orchestrator | 2025-03-27 00:54:41 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:41.608538 | orchestrator | 2025-03-27 00:54:41.608603 | orchestrator | 2025-03-27 00:54:41.608619 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 00:54:41.608634 | orchestrator | 2025-03-27 00:54:41.608648 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 00:54:41.608662 | orchestrator | Thursday 27 March 2025 00:52:08 +0000 (0:00:00.493) 0:00:00.493 ******** 2025-03-27 00:54:41.608676 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:54:41.608691 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:54:41.608705 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:54:41.608719 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.608732 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.608746 | orchestrator | ok: 
[testbed-node-2] 2025-03-27 00:54:41.608759 | orchestrator | 2025-03-27 00:54:41.608773 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 00:54:41.608787 | orchestrator | Thursday 27 March 2025 00:52:09 +0000 (0:00:01.139) 0:00:01.632 ******** 2025-03-27 00:54:41.608801 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-03-27 00:54:41.608815 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-03-27 00:54:41.608829 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-03-27 00:54:41.608842 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-03-27 00:54:41.608856 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-03-27 00:54:41.608870 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-03-27 00:54:41.608883 | orchestrator | 2025-03-27 00:54:41.608897 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-03-27 00:54:41.609023 | orchestrator | 2025-03-27 00:54:41.609038 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-03-27 00:54:41.609051 | orchestrator | Thursday 27 March 2025 00:52:11 +0000 (0:00:01.461) 0:00:03.094 ******** 2025-03-27 00:54:41.609067 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:54:41.609082 | orchestrator | 2025-03-27 00:54:41.609096 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-03-27 00:54:41.609112 | orchestrator | Thursday 27 March 2025 00:52:13 +0000 (0:00:01.735) 0:00:04.829 ******** 2025-03-27 00:54:41.609128 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609283 | orchestrator | 2025-03-27 00:54:41.609299 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-03-27 00:54:41.609315 | orchestrator | Thursday 27 March 2025 00:52:14 +0000 (0:00:01.542) 0:00:06.372 ******** 2025-03-27 00:54:41.609361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609379 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609467 | orchestrator | 2025-03-27 00:54:41.609481 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-03-27 00:54:41.609495 | orchestrator | Thursday 27 March 2025 00:52:16 +0000 (0:00:02.225) 0:00:08.598 ******** 2025-03-27 00:54:41.609509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609610 | orchestrator | 2025-03-27 00:54:41.609624 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-03-27 00:54:41.609638 | orchestrator | Thursday 27 March 2025 00:52:18 +0000 (0:00:01.357) 0:00:09.955 ******** 2025-03-27 00:54:41.609652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609673 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609687 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609755 | orchestrator | 2025-03-27 
00:54:41.609769 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-03-27 00:54:41.609783 | orchestrator | Thursday 27 March 2025 00:52:20 +0000 (0:00:02.172) 0:00:12.128 ******** 2025-03-27 00:54:41.609797 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609811 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.609887 | orchestrator | 2025-03-27 00:54:41.609900 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-03-27 00:54:41.609914 | orchestrator | Thursday 27 March 2025 00:52:22 +0000 (0:00:02.094) 0:00:14.222 ******** 2025-03-27 00:54:41.609928 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:54:41.609944 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:54:41.609958 | 
orchestrator | changed: [testbed-node-4] 2025-03-27 00:54:41.609971 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:54:41.609985 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:54:41.609999 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:54:41.610060 | orchestrator | 2025-03-27 00:54:41.610080 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-03-27 00:54:41.610094 | orchestrator | Thursday 27 March 2025 00:52:26 +0000 (0:00:03.508) 0:00:17.732 ******** 2025-03-27 00:54:41.610108 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-03-27 00:54:41.610122 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-03-27 00:54:41.610136 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-03-27 00:54:41.610156 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-03-27 00:54:41.610171 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-03-27 00:54:41.610185 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-03-27 00:54:41.610199 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-27 00:54:41.610212 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-27 00:54:41.610226 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-27 00:54:41.610246 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-27 00:54:41.610260 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-27 00:54:41.610274 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-27 00:54:41.610288 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-27 00:54:41.610311 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-27 00:54:41.610325 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-27 00:54:41.610370 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-27 00:54:41.610386 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-27 00:54:41.610400 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-27 00:54:41.610414 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-27 00:54:41.610429 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-27 00:54:41.610444 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-27 00:54:41.610458 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-27 00:54:41.610472 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-27 00:54:41.610485 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-27 00:54:41.610499 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-27 00:54:41.610513 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-27 00:54:41.610527 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-27 00:54:41.610540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-27 00:54:41.610554 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-27 00:54:41.610568 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-27 00:54:41.610581 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-27 00:54:41.610595 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-27 00:54:41.610608 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-27 00:54:41.610622 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-27 00:54:41.610636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-27 00:54:41.610649 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-27 00:54:41.610663 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-27 00:54:41.610677 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-27 00:54:41.610691 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-27 00:54:41.610705 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-27 00:54:41.610724 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-27 00:54:41.610738 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-27 00:54:41.610759 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-03-27 00:54:41.610774 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-03-27 00:54:41.610787 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-03-27 00:54:41.610801 | orchestrator | ok: [testbed-node-0] => 
(item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-03-27 00:54:41.610815 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-03-27 00:54:41.610829 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-03-27 00:54:41.610843 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-27 00:54:41.610862 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-27 00:54:41.610876 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-27 00:54:41.610890 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-27 00:54:41.610904 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-27 00:54:41.610918 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-27 00:54:41.610932 | orchestrator | 2025-03-27 00:54:41.610945 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-27 00:54:41.610959 | orchestrator | Thursday 27 March 2025 00:52:47 +0000 (0:00:21.308) 0:00:39.041 ******** 2025-03-27 00:54:41.610973 | orchestrator | 2025-03-27 00:54:41.610987 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-27 00:54:41.611001 | orchestrator | Thursday 27 March 2025 00:52:47 +0000 (0:00:00.136) 0:00:39.177 ******** 2025-03-27 00:54:41.611014 | orchestrator | 2025-03-27 00:54:41.611028 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-27 00:54:41.611041 | orchestrator | Thursday 27 March 2025 00:52:47 +0000 (0:00:00.472) 0:00:39.649 ******** 2025-03-27 00:54:41.611055 | orchestrator | 2025-03-27 00:54:41.611068 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-27 00:54:41.611082 | orchestrator | Thursday 27 March 2025 00:52:47 +0000 (0:00:00.070) 0:00:39.719 ******** 2025-03-27 00:54:41.611095 | orchestrator | 2025-03-27 00:54:41.611109 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-27 00:54:41.611123 | orchestrator | Thursday 27 March 2025 00:52:48 +0000 (0:00:00.070) 0:00:39.790 ******** 2025-03-27 00:54:41.611137 | orchestrator | 2025-03-27 00:54:41.611150 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-27 00:54:41.611164 | orchestrator | Thursday 27 March 2025 00:52:48 +0000 (0:00:00.065) 0:00:39.855 ******** 2025-03-27 00:54:41.611178 | orchestrator | 2025-03-27 00:54:41.611191 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-03-27 00:54:41.611205 | orchestrator | Thursday 27 March 2025 00:52:48 +0000 (0:00:00.301) 0:00:40.156 ******** 2025-03-27 00:54:41.611218 | orchestrator | ok: [testbed-node-3] 2025-03-27 00:54:41.611232 | orchestrator | ok: [testbed-node-0] 2025-03-27 
00:54:41.611246 | orchestrator | ok: [testbed-node-5] 2025-03-27 00:54:41.611260 | orchestrator | ok: [testbed-node-4] 2025-03-27 00:54:41.611274 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.611287 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.611307 | orchestrator | 2025-03-27 00:54:41.611321 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-03-27 00:54:41.611334 | orchestrator | Thursday 27 March 2025 00:52:50 +0000 (0:00:02.038) 0:00:42.194 ******** 2025-03-27 00:54:41.611403 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:54:41.611419 | orchestrator | changed: [testbed-node-5] 2025-03-27 00:54:41.611433 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:54:41.611446 | orchestrator | changed: [testbed-node-3] 2025-03-27 00:54:41.611460 | orchestrator | changed: [testbed-node-4] 2025-03-27 00:54:41.611473 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:54:41.611487 | orchestrator | 2025-03-27 00:54:41.611501 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-03-27 00:54:41.611515 | orchestrator | 2025-03-27 00:54:41.611528 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-27 00:54:41.611542 | orchestrator | Thursday 27 March 2025 00:53:09 +0000 (0:00:18.980) 0:01:01.175 ******** 2025-03-27 00:54:41.611556 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:54:41.611570 | orchestrator | 2025-03-27 00:54:41.611583 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-27 00:54:41.611597 | orchestrator | Thursday 27 March 2025 00:53:10 +0000 (0:00:00.662) 0:01:01.837 ******** 2025-03-27 00:54:41.611611 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:54:41.611625 | orchestrator | 2025-03-27 00:54:41.611645 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-03-27 00:54:41.611665 | orchestrator | Thursday 27 March 2025 00:53:10 +0000 (0:00:00.855) 0:01:02.693 ******** 2025-03-27 00:54:41.611680 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.611694 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.611708 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.611722 | orchestrator | 2025-03-27 00:54:41.611736 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-03-27 00:54:41.611750 | orchestrator | Thursday 27 March 2025 00:53:12 +0000 (0:00:01.280) 0:01:03.973 ******** 2025-03-27 00:54:41.611763 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.611777 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.611791 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.611804 | orchestrator | 2025-03-27 00:54:41.611818 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-03-27 00:54:41.611832 | orchestrator | Thursday 27 March 2025 00:53:12 +0000 (0:00:00.361) 0:01:04.334 ******** 2025-03-27 00:54:41.611846 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.611860 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.611873 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.611887 | orchestrator | 2025-03-27 00:54:41.611901 | orchestrator | TASK [ovn-db : Establish 
whether the OVN NB cluster has already existed] ******* 2025-03-27 00:54:41.611915 | orchestrator | Thursday 27 March 2025 00:53:13 +0000 (0:00:00.586) 0:01:04.921 ******** 2025-03-27 00:54:41.611928 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.611941 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.611954 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.611966 | orchestrator | 2025-03-27 00:54:41.611979 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-03-27 00:54:41.611991 | orchestrator | Thursday 27 March 2025 00:53:14 +0000 (0:00:01.037) 0:01:05.958 ******** 2025-03-27 00:54:41.612003 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.612016 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.612028 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.612040 | orchestrator | 2025-03-27 00:54:41.612052 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-03-27 00:54:41.612065 | orchestrator | Thursday 27 March 2025 00:53:14 +0000 (0:00:00.575) 0:01:06.534 ******** 2025-03-27 00:54:41.612077 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.612100 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.612119 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.612131 | orchestrator | 2025-03-27 00:54:41.612143 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-03-27 00:54:41.612156 | orchestrator | Thursday 27 March 2025 00:53:15 +0000 (0:00:00.725) 0:01:07.259 ******** 2025-03-27 00:54:41.612168 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.612180 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.612192 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.612204 | orchestrator | 2025-03-27 00:54:41.612217 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-03-27 00:54:41.612229 | orchestrator | Thursday 27 March 2025 00:53:16 +0000 (0:00:00.657) 0:01:07.916 ******** 2025-03-27 00:54:41.612241 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.612254 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.612265 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.612278 | orchestrator | 2025-03-27 00:54:41.612290 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-03-27 00:54:41.612302 | orchestrator | Thursday 27 March 2025 00:53:16 +0000 (0:00:00.469) 0:01:08.386 ******** 2025-03-27 00:54:41.612314 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.612327 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.612339 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.612367 | orchestrator | 2025-03-27 00:54:41.612380 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-03-27 00:54:41.612392 | orchestrator | Thursday 27 March 2025 00:53:16 +0000 (0:00:00.322) 0:01:08.708 ******** 2025-03-27 00:54:41.612404 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.612417 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.612429 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.612441 | orchestrator | 2025-03-27 00:54:41.612453 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-03-27 00:54:41.612465 | orchestrator 
| Thursday 27 March 2025 00:53:17 +0000 (0:00:00.525) 0:01:09.234 ******** 2025-03-27 00:54:41.612478 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.612490 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.612502 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.612514 | orchestrator | 2025-03-27 00:54:41.612526 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-03-27 00:54:41.612576 | orchestrator | Thursday 27 March 2025 00:53:18 +0000 (0:00:00.671) 0:01:09.906 ******** 2025-03-27 00:54:41.612590 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.612603 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.612615 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.612627 | orchestrator | 2025-03-27 00:54:41.612640 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-03-27 00:54:41.612652 | orchestrator | Thursday 27 March 2025 00:53:18 +0000 (0:00:00.650) 0:01:10.556 ******** 2025-03-27 00:54:41.612726 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.612740 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.612753 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.612765 | orchestrator | 2025-03-27 00:54:41.612777 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-03-27 00:54:41.612794 | orchestrator | Thursday 27 March 2025 00:53:19 +0000 (0:00:00.322) 0:01:10.878 ******** 2025-03-27 00:54:41.612807 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.612819 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.612832 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.612844 | orchestrator | 2025-03-27 00:54:41.612856 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-03-27 00:54:41.612869 | orchestrator | Thursday 27 March 2025 00:53:19 +0000 (0:00:00.632) 0:01:11.511 ******** 2025-03-27 00:54:41.612881 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.612893 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.612905 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.612924 | orchestrator | 2025-03-27 00:54:41.612943 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-03-27 00:54:41.612956 | orchestrator | Thursday 27 March 2025 00:53:20 +0000 (0:00:00.595) 0:01:12.106 ******** 2025-03-27 00:54:41.612968 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.612980 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.612992 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.613004 | orchestrator | 2025-03-27 00:54:41.613017 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-03-27 00:54:41.613033 | orchestrator | Thursday 27 March 2025 00:53:21 +0000 (0:00:00.645) 0:01:12.752 ******** 2025-03-27 00:54:41.613046 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.613058 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.613070 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.613082 | orchestrator | 2025-03-27 00:54:41.613095 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-27 00:54:41.613107 | orchestrator | Thursday 27 March 2025 00:53:21 +0000 (0:00:00.365) 
0:01:13.117 ******** 2025-03-27 00:54:41.613119 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:54:41.613131 | orchestrator | 2025-03-27 00:54:41.613144 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-03-27 00:54:41.613156 | orchestrator | Thursday 27 March 2025 00:53:22 +0000 (0:00:01.087) 0:01:14.204 ******** 2025-03-27 00:54:41.613168 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.613180 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.613193 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.613205 | orchestrator | 2025-03-27 00:54:41.613217 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-03-27 00:54:41.613229 | orchestrator | Thursday 27 March 2025 00:53:23 +0000 (0:00:00.763) 0:01:14.967 ******** 2025-03-27 00:54:41.613241 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.613253 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.613266 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.613278 | orchestrator | 2025-03-27 00:54:41.613290 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-03-27 00:54:41.613302 | orchestrator | Thursday 27 March 2025 00:53:24 +0000 (0:00:00.802) 0:01:15.770 ******** 2025-03-27 00:54:41.613314 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.613327 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.613355 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.613368 | orchestrator | 2025-03-27 00:54:41.613381 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-03-27 00:54:41.613393 | orchestrator | Thursday 27 March 2025 00:53:24 +0000 (0:00:00.812) 0:01:16.582 ******** 2025-03-27 00:54:41.613405 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.613417 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.613429 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.613441 | orchestrator | 2025-03-27 00:54:41.613453 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-03-27 00:54:41.613466 | orchestrator | Thursday 27 March 2025 00:53:25 +0000 (0:00:01.022) 0:01:17.605 ******** 2025-03-27 00:54:41.613478 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.613490 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.613502 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.613514 | orchestrator | 2025-03-27 00:54:41.613526 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-03-27 00:54:41.613538 | orchestrator | Thursday 27 March 2025 00:53:26 +0000 (0:00:00.529) 0:01:18.134 ******** 2025-03-27 00:54:41.613551 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.613568 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.613580 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.613592 | orchestrator | 2025-03-27 00:54:41.613605 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-03-27 00:54:41.613623 | orchestrator | Thursday 27 March 2025 00:53:27 +0000 (0:00:01.179) 0:01:19.314 ******** 2025-03-27 00:54:41.613635 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.613647 | orchestrator | 
skipping: [testbed-node-1] 2025-03-27 00:54:41.613659 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.613671 | orchestrator | 2025-03-27 00:54:41.613684 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-03-27 00:54:41.613696 | orchestrator | Thursday 27 March 2025 00:53:28 +0000 (0:00:00.809) 0:01:20.123 ******** 2025-03-27 00:54:41.613708 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.613720 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.613732 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.613744 | orchestrator | 2025-03-27 00:54:41.613756 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-03-27 00:54:41.613768 | orchestrator | Thursday 27 March 2025 00:53:28 +0000 (0:00:00.603) 0:01:20.726 ******** 2025-03-27 00:54:41.613781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.613795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.613814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.613828 | orchestrator | 2025-03-27 00:54:41 | INFO  | Task a3dd2c15-8497-4e76-ae9f-e6f3d56c468c is in state SUCCESS 2025-03-27 00:54:41.613842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.613860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.613873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.613885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.613903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.613915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.613928 | orchestrator | 2025-03-27 00:54:41.613940 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-03-27 00:54:41.613952 | orchestrator | Thursday 27 March 2025 00:53:30 +0000 (0:00:01.563) 0:01:22.289 ******** 2025-03-27 00:54:41.613965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.613977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614142 | orchestrator | 2025-03-27 00:54:41.614154 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-03-27 00:54:41.614167 | orchestrator | Thursday 27 March 2025 00:53:34 +0000 (0:00:04.409) 0:01:26.698 ******** 2025-03-27 00:54:41.614179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.614312 | orchestrator | 2025-03-27 00:54:41.614325 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-27 00:54:41.614337 | orchestrator | Thursday 27 March 2025 00:53:37 +0000 (0:00:02.730) 0:01:29.428 ******** 2025-03-27 00:54:41.614365 | orchestrator | 2025-03-27 00:54:41.614378 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-27 00:54:41.614391 | orchestrator | Thursday 27 March 2025 00:53:37 +0000 (0:00:00.068) 0:01:29.497 ******** 2025-03-27 00:54:41.614403 | orchestrator | 2025-03-27 00:54:41.614416 | orchestrator | TASK 
[ovn-db : Flush handlers] ************************************************* 2025-03-27 00:54:41.614428 | orchestrator | Thursday 27 March 2025 00:53:37 +0000 (0:00:00.091) 0:01:29.589 ******** 2025-03-27 00:54:41.614440 | orchestrator | 2025-03-27 00:54:41.614452 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-03-27 00:54:41.614468 | orchestrator | Thursday 27 March 2025 00:53:38 +0000 (0:00:00.220) 0:01:29.810 ******** 2025-03-27 00:54:41.614480 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:54:41.614493 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:54:41.614505 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:54:41.614517 | orchestrator | 2025-03-27 00:54:41.614529 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-03-27 00:54:41.614542 | orchestrator | Thursday 27 March 2025 00:53:40 +0000 (0:00:02.509) 0:01:32.319 ******** 2025-03-27 00:54:41.614554 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:54:41.614566 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:54:41.614578 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:54:41.614590 | orchestrator | 2025-03-27 00:54:41.614602 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-03-27 00:54:41.614615 | orchestrator | Thursday 27 March 2025 00:53:47 +0000 (0:00:06.899) 0:01:39.219 ******** 2025-03-27 00:54:41.614627 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:54:41.614639 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:54:41.614651 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:54:41.614663 | orchestrator | 2025-03-27 00:54:41.614675 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-03-27 00:54:41.614687 | orchestrator | Thursday 27 March 2025 00:53:54 +0000 (0:00:07.127) 0:01:46.347 ******** 2025-03-27 00:54:41.614700 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.614712 | orchestrator | 2025-03-27 00:54:41.614724 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-03-27 00:54:41.614736 | orchestrator | Thursday 27 March 2025 00:53:54 +0000 (0:00:00.139) 0:01:46.486 ******** 2025-03-27 00:54:41.614748 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.614766 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.614779 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.614801 | orchestrator | 2025-03-27 00:54:41.614814 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-03-27 00:54:41.614826 | orchestrator | Thursday 27 March 2025 00:53:55 +0000 (0:00:01.135) 0:01:47.621 ******** 2025-03-27 00:54:41.614838 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.614851 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.614863 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:54:41.614875 | orchestrator | 2025-03-27 00:54:41.614887 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-03-27 00:54:41.614899 | orchestrator | Thursday 27 March 2025 00:53:56 +0000 (0:00:00.613) 0:01:48.235 ******** 2025-03-27 00:54:41.614911 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.614923 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.614936 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.614948 | orchestrator | 2025-03-27 
00:54:41.614960 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-03-27 00:54:41.614972 | orchestrator | Thursday 27 March 2025 00:53:57 +0000 (0:00:00.941) 0:01:49.176 ******** 2025-03-27 00:54:41.614984 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.614996 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.615008 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:54:41.615020 | orchestrator | 2025-03-27 00:54:41.615032 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-03-27 00:54:41.615044 | orchestrator | Thursday 27 March 2025 00:53:58 +0000 (0:00:00.631) 0:01:49.807 ******** 2025-03-27 00:54:41.615056 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.615068 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.615081 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.615093 | orchestrator | 2025-03-27 00:54:41.615105 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-03-27 00:54:41.615117 | orchestrator | Thursday 27 March 2025 00:53:59 +0000 (0:00:01.151) 0:01:50.959 ******** 2025-03-27 00:54:41.615129 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.615141 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.615153 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.615165 | orchestrator | 2025-03-27 00:54:41.615177 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-03-27 00:54:41.615189 | orchestrator | Thursday 27 March 2025 00:54:00 +0000 (0:00:00.844) 0:01:51.804 ******** 2025-03-27 00:54:41.615201 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.615214 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.615226 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.615238 | orchestrator | 2025-03-27 00:54:41.615250 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-03-27 00:54:41.615262 | orchestrator | Thursday 27 March 2025 00:54:00 +0000 (0:00:00.544) 0:01:52.348 ******** 2025-03-27 00:54:41.615274 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615287 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615299 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615318 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615330 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615365 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615379 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615392 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615404 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615416 | orchestrator | 2025-03-27 00:54:41.615429 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-03-27 00:54:41.615441 | orchestrator | Thursday 27 March 2025 00:54:02 +0000 (0:00:01.778) 0:01:54.126 ******** 2025-03-27 00:54:41.615454 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615466 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615478 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615514 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615558 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615583 | orchestrator | 2025-03-27 00:54:41.615595 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-03-27 00:54:41.615608 | orchestrator | Thursday 27 March 2025 00:54:08 +0000 (0:00:05.914) 
0:02:00.040 ******** 2025-03-27 00:54:41.615620 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615632 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615645 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615663 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615675 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615688 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615704 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615723 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615736 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 00:54:41.615748 | orchestrator | 2025-03-27 00:54:41.615761 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-27 00:54:41.615773 | orchestrator | Thursday 27 March 2025 00:54:11 +0000 (0:00:03.389) 0:02:03.430 ******** 2025-03-27 00:54:41.615785 | orchestrator | 2025-03-27 00:54:41.615798 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-27 00:54:41.615810 | orchestrator | Thursday 27 March 2025 00:54:11 +0000 (0:00:00.217) 0:02:03.647 ******** 2025-03-27 00:54:41.615822 | orchestrator | 2025-03-27 00:54:41.615835 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-27 00:54:41.615847 | orchestrator | Thursday 27 March 2025 00:54:11 +0000 (0:00:00.061) 0:02:03.708 ******** 2025-03-27 00:54:41.615859 | orchestrator | 2025-03-27 00:54:41.615871 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-03-27 00:54:41.615884 | orchestrator | Thursday 27 March 2025 00:54:12 +0000 (0:00:00.055) 0:02:03.764 ******** 2025-03-27 00:54:41.615896 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:54:41.615908 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:54:41.615920 | orchestrator | 2025-03-27 00:54:41.615932 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-03-27 00:54:41.615950 | orchestrator | Thursday 27 March 2025 00:54:19 +0000 (0:00:07.141) 0:02:10.906 ******** 2025-03-27 00:54:41.615962 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:54:41.615974 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:54:41.615987 | orchestrator | 2025-03-27 00:54:41.615998 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-03-27 00:54:41.616011 | orchestrator | Thursday 27 March 2025 00:54:25 +0000 (0:00:06.819) 0:02:17.726 ******** 2025-03-27 00:54:41.616023 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:54:41.616035 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:54:41.616047 | orchestrator | 2025-03-27 00:54:41.616059 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-03-27 00:54:41.616071 | orchestrator | Thursday 27 March 2025 00:54:32 +0000 (0:00:06.591) 0:02:24.318 ******** 2025-03-27 00:54:41.616083 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:54:41.616095 | orchestrator | 2025-03-27 00:54:41.616107 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-03-27 00:54:41.616119 | orchestrator | Thursday 27 March 2025 00:54:33 +0000 (0:00:00.609) 0:02:24.927 ******** 2025-03-27 00:54:41.616131 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.616144 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.616156 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.616168 | orchestrator | 2025-03-27 00:54:41.616180 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 
2025-03-27 00:54:41.616192 | orchestrator | Thursday 27 March 2025 00:54:34 +0000 (0:00:00.928) 0:02:25.856 ******** 2025-03-27 00:54:41.616205 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.616217 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.616229 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:54:41.616249 | orchestrator | 2025-03-27 00:54:41.616261 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-03-27 00:54:41.616274 | orchestrator | Thursday 27 March 2025 00:54:34 +0000 (0:00:00.699) 0:02:26.555 ******** 2025-03-27 00:54:41.616287 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.616300 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.616313 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.616325 | orchestrator | 2025-03-27 00:54:41.616337 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-03-27 00:54:41.616398 | orchestrator | Thursday 27 March 2025 00:54:36 +0000 (0:00:01.204) 0:02:27.759 ******** 2025-03-27 00:54:41.616411 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:54:41.616423 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:54:41.616436 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:54:41.616448 | orchestrator | 2025-03-27 00:54:41.616460 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-03-27 00:54:41.616472 | orchestrator | Thursday 27 March 2025 00:54:37 +0000 (0:00:00.982) 0:02:28.742 ******** 2025-03-27 00:54:41.616485 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.616497 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.616509 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.616521 | orchestrator | 2025-03-27 00:54:41.616534 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-03-27 00:54:41.616546 | orchestrator | Thursday 27 March 2025 00:54:37 +0000 (0:00:00.876) 0:02:29.619 ******** 2025-03-27 00:54:41.616558 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:54:41.616570 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:54:41.616583 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:54:41.616594 | orchestrator | 2025-03-27 00:54:41.616607 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 00:54:41.616619 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-03-27 00:54:41.616638 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-03-27 00:54:41.616785 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-03-27 00:54:41.616805 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:54:41.616824 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:54:41.616835 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 00:54:41.616845 | orchestrator | 2025-03-27 00:54:41.616856 | orchestrator | 2025-03-27 00:54:41.616866 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 00:54:41.616876 | orchestrator | Thursday 27 March 2025 00:54:39 +0000 (0:00:01.304) 
0:02:30.923 ******** 2025-03-27 00:54:41.616887 | orchestrator | =============================================================================== 2025-03-27 00:54:41.616897 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.31s 2025-03-27 00:54:41.616907 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 18.98s 2025-03-27 00:54:41.616917 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.72s 2025-03-27 00:54:41.616927 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.72s 2025-03-27 00:54:41.616937 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.65s 2025-03-27 00:54:41.616947 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.91s 2025-03-27 00:54:41.616961 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.41s 2025-03-27 00:54:41.616971 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.51s 2025-03-27 00:54:41.616982 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.39s 2025-03-27 00:54:41.616992 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.73s 2025-03-27 00:54:41.617002 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.23s 2025-03-27 00:54:41.617012 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.17s 2025-03-27 00:54:41.617022 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.09s 2025-03-27 00:54:41.617032 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.04s 2025-03-27 00:54:41.617043 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.78s 2025-03-27 00:54:41.617053 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.74s 2025-03-27 00:54:41.617063 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.56s 2025-03-27 00:54:41.617073 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.54s 2025-03-27 00:54:41.617083 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.46s 2025-03-27 00:54:41.617093 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.36s 2025-03-27 00:54:41.617107 | orchestrator | 2025-03-27 00:54:41 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:44.666111 | orchestrator | 2025-03-27 00:54:41 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:44.666866 | orchestrator | 2025-03-27 00:54:41 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:44.666921 | orchestrator | 2025-03-27 00:54:44 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:44.671427 | orchestrator | 2025-03-27 00:54:44 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:47.734500 | orchestrator | 2025-03-27 00:54:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:47.734634 | orchestrator | 2025-03-27 00:54:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 
00:54:47.734669 | orchestrator | 2025-03-27 00:54:47 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:47.734893 | orchestrator | 2025-03-27 00:54:47 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:47.734920 | orchestrator | 2025-03-27 00:54:47 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:47.734940 | orchestrator | 2025-03-27 00:54:47 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:50.794291 | orchestrator | 2025-03-27 00:54:50 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:50.796336 | orchestrator | 2025-03-27 00:54:50 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:50.798802 | orchestrator | 2025-03-27 00:54:50 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:53.849765 | orchestrator | 2025-03-27 00:54:50 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:53.849856 | orchestrator | 2025-03-27 00:54:53 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:53.851920 | orchestrator | 2025-03-27 00:54:53 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:53.853526 | orchestrator | 2025-03-27 00:54:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:56.894527 | orchestrator | 2025-03-27 00:54:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:56.894654 | orchestrator | 2025-03-27 00:54:56 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:56.895136 | orchestrator | 2025-03-27 00:54:56 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:56.895168 | orchestrator | 2025-03-27 00:54:56 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:59.941517 | orchestrator | 2025-03-27 00:54:56 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:54:59.941644 | orchestrator | 2025-03-27 00:54:59 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:54:59.942645 | orchestrator | 2025-03-27 00:54:59 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:54:59.943658 | orchestrator | 2025-03-27 00:54:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:54:59.944091 | orchestrator | 2025-03-27 00:54:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:02.998130 | orchestrator | 2025-03-27 00:55:02 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:02.999113 | orchestrator | 2025-03-27 00:55:02 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:03.003092 | orchestrator | 2025-03-27 00:55:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:06.062763 | orchestrator | 2025-03-27 00:55:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:06.062886 | orchestrator | 2025-03-27 00:55:06 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:06.065419 | orchestrator | 2025-03-27 00:55:06 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:06.068302 | orchestrator | 2025-03-27 00:55:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:09.126138 | 
orchestrator | 2025-03-27 00:55:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:09.126272 | orchestrator | 2025-03-27 00:55:09 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:09.128317 | orchestrator | 2025-03-27 00:55:09 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:09.131179 | orchestrator | 2025-03-27 00:55:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:12.188982 | orchestrator | 2025-03-27 00:55:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:12.189105 | orchestrator | 2025-03-27 00:55:12 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:12.190147 | orchestrator | 2025-03-27 00:55:12 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:12.192641 | orchestrator | 2025-03-27 00:55:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:15.255058 | orchestrator | 2025-03-27 00:55:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:15.255188 | orchestrator | 2025-03-27 00:55:15 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:18.305026 | orchestrator | 2025-03-27 00:55:15 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:18.305137 | orchestrator | 2025-03-27 00:55:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:18.305157 | orchestrator | 2025-03-27 00:55:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:18.305190 | orchestrator | 2025-03-27 00:55:18 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:18.307015 | orchestrator | 2025-03-27 00:55:18 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:18.308926 | orchestrator | 2025-03-27 00:55:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:18.309423 | orchestrator | 2025-03-27 00:55:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:21.357003 | orchestrator | 2025-03-27 00:55:21 | INFO  | Task d6475cdd-d505-4dac-aeca-ac3130379d3b is in state STARTED 2025-03-27 00:55:21.358452 | orchestrator | 2025-03-27 00:55:21 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:21.360293 | orchestrator | 2025-03-27 00:55:21 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:21.363711 | orchestrator | 2025-03-27 00:55:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:21.364178 | orchestrator | 2025-03-27 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:24.424337 | orchestrator | 2025-03-27 00:55:24 | INFO  | Task d6475cdd-d505-4dac-aeca-ac3130379d3b is in state STARTED 2025-03-27 00:55:24.426435 | orchestrator | 2025-03-27 00:55:24 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:24.427463 | orchestrator | 2025-03-27 00:55:24 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:24.428655 | orchestrator | 2025-03-27 00:55:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:27.466161 | orchestrator | 2025-03-27 00:55:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:27.466297 | orchestrator | 2025-03-27 00:55:27 | INFO  | Task 
d6475cdd-d505-4dac-aeca-ac3130379d3b is in state STARTED 2025-03-27 00:55:27.466753 | orchestrator | 2025-03-27 00:55:27 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:27.470478 | orchestrator | 2025-03-27 00:55:27 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:27.471412 | orchestrator | 2025-03-27 00:55:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:27.471650 | orchestrator | 2025-03-27 00:55:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:30.521667 | orchestrator | 2025-03-27 00:55:30 | INFO  | Task d6475cdd-d505-4dac-aeca-ac3130379d3b is in state STARTED 2025-03-27 00:55:30.522419 | orchestrator | 2025-03-27 00:55:30 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:30.522467 | orchestrator | 2025-03-27 00:55:30 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:30.523378 | orchestrator | 2025-03-27 00:55:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:33.573639 | orchestrator | 2025-03-27 00:55:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:33.573779 | orchestrator | 2025-03-27 00:55:33 | INFO  | Task d6475cdd-d505-4dac-aeca-ac3130379d3b is in state SUCCESS 2025-03-27 00:55:33.574350 | orchestrator | 2025-03-27 00:55:33 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:33.574417 | orchestrator | 2025-03-27 00:55:33 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:33.574974 | orchestrator | 2025-03-27 00:55:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:36.638985 | orchestrator | 2025-03-27 00:55:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:36.639130 | orchestrator | 2025-03-27 00:55:36 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:36.641526 | orchestrator | 2025-03-27 00:55:36 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:36.643891 | orchestrator | 2025-03-27 00:55:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:39.709091 | orchestrator | 2025-03-27 00:55:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:39.709222 | orchestrator | 2025-03-27 00:55:39 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:39.710969 | orchestrator | 2025-03-27 00:55:39 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:39.713788 | orchestrator | 2025-03-27 00:55:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:42.761194 | orchestrator | 2025-03-27 00:55:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:42.761322 | orchestrator | 2025-03-27 00:55:42 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED 2025-03-27 00:55:42.764319 | orchestrator | 2025-03-27 00:55:42 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 00:55:42.765034 | orchestrator | 2025-03-27 00:55:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 00:55:42.765120 | orchestrator | 2025-03-27 00:55:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 00:55:45.811292 | orchestrator | 2025-03-27 00:55:45 | INFO  | Task 
b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state STARTED
2025-03-27 00:55:45.812103 | orchestrator | 2025-03-27 00:55:45 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED
2025-03-27 00:55:45.812145 | orchestrator | 2025-03-27 00:55:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 00:55:48.865835 | orchestrator | 2025-03-27 00:55:45 | INFO  | Wait 1 second(s) until the next check
[... the same three tasks (b8fd4b1e-56e7-424a-b99c-e0c69844f2b7, 58c975b8-7000-45bf-b9ac-68840356d7ff, 06f38c9e-e3c1-4595-a798-aa145fe6df11) are reported in state STARTED, followed by "Wait 1 second(s) until the next check", on every polling cycle of roughly three seconds from 00:55:48 through 00:58:46; the repeated polling output is condensed here ...]
2025-03-27 00:58:49.245984 | orchestrator | 2025-03-27 00:58:49 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED
2025-03-27 00:58:49.256904 | orchestrator | 2025-03-27 00:58:49 | INFO  | Task b8fd4b1e-56e7-424a-b99c-e0c69844f2b7 is in state SUCCESS
2025-03-27 00:58:49.258599 | orchestrator |
2025-03-27 00:58:49.258673 | orchestrator | None
2025-03-27 00:58:49.258691 | orchestrator |
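What the polling above shows: the job has handed several long-running deploy tasks to the OSISM manager and simply asks for each task's state in a loop, sleeping briefly between checks, until a task leaves STARTED. A minimal sketch of that pattern, assuming a hypothetical get_task_state(task_id) callable rather than the real OSISM client API:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll task states until every task has reached a terminal state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                # get_task_state is an assumed helper returning e.g. STARTED, SUCCESS, FAILURE
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

Once a task reaches SUCCESS, its buffered Ansible output is printed, which is what follows below.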
2025-03-27 00:58:49.258706 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-27 00:58:49.258721 | orchestrator |
2025-03-27 00:58:49.258735 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-27 00:58:49.258750 | orchestrator | Thursday 27 March 2025 00:50:36 +0000 (0:00:00.522) 0:00:00.522 ********
2025-03-27 00:58:49.258864 | orchestrator | ok: [testbed-node-0]
2025-03-27 00:58:49.258914 | orchestrator | ok: [testbed-node-1]
2025-03-27 00:58:49.258931 | orchestrator | ok: [testbed-node-2]
2025-03-27 00:58:49.258980 | orchestrator |
2025-03-27 00:58:49.258995 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-27 00:58:49.259010 | orchestrator | Thursday 27 March 2025 00:50:36 +0000 (0:00:00.461) 0:00:00.984 ********
2025-03-27 00:58:49.259025 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-03-27 00:58:49.259039 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-03-27 00:58:49.259053 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-03-27 00:58:49.259093 | orchestrator |
2025-03-27 00:58:49.259108 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-03-27 00:58:49.259122 | orchestrator |
2025-03-27 00:58:49.259138 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-03-27 00:58:49.259155 | orchestrator | Thursday 27 March 2025 00:50:37 +0000 (0:00:00.467) 0:00:01.451 ********
2025-03-27 00:58:49.259209 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-27 00:58:49.259226 | orchestrator |
2025-03-27 00:58:49.259242 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-03-27 00:58:49.259258 | orchestrator | Thursday 27 March 2025 00:50:38 +0000 (0:00:01.124) 0:00:02.576 ********
2025-03-27 00:58:49.259273 | orchestrator | ok: [testbed-node-2]
2025-03-27 00:58:49.259290 | orchestrator | ok: [testbed-node-0]
2025-03-27 00:58:49.259305 | orchestrator | ok: [testbed-node-1]
2025-03-27 00:58:49.259347 | orchestrator |
2025-03-27 00:58:49.259364 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-03-27 00:58:49.259478 | orchestrator | Thursday 27 March 2025 00:50:40 +0000 (0:00:01.732) 0:00:04.308 ********
2025-03-27 00:58:49.259497 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-27 00:58:49.259513 | orchestrator |
2025-03-27 00:58:49.259527 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-03-27 00:58:49.259541 | orchestrator | Thursday 27 March 2025 00:50:41 +0000 (0:00:01.355) 0:00:05.664 ********
2025-03-27 00:58:49.259577 | orchestrator | ok: [testbed-node-0]
2025-03-27 00:58:49.259592 | orchestrator | ok: [testbed-node-1]
2025-03-27 00:58:49.259606 | orchestrator | ok: [testbed-node-2]
2025-03-27 00:58:49.259620 | orchestrator |
2025-03-27 00:58:49.259633 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-03-27 00:58:49.259647 | orchestrator | Thursday 27 March 2025 00:50:43 +0000 (0:00:01.516) 0:00:07.181 ********
2025-03-27 00:58:49.259661 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-27 00:58:49.259728 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-27 00:58:49.259766 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-27 00:58:49.259780 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-27 00:58:49.259794 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-27 00:58:49.259809 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-27 00:58:49.259823 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-03-27 00:58:49.259859 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-27 00:58:49.259911 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-27 00:58:49.259927 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-03-27 00:58:49.259941 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-27 00:58:49.259955 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
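The "Setting sysctl values" task tunes each controller for the load balancer: ip_nonlocal_bind (IPv4 and IPv6) lets HAProxy/keepalived bind the virtual IP even while it is not assigned locally, net.unix.max_dgram_qlen raises the unix datagram queue, and net.ipv4.tcp_retries2 appears to use the KOLLA_UNSET sentinel, i.e. the kernel default is left alone (those items come back "ok", not "changed"). A rough, illustrative Python equivalent of what ends up applied on each node; the real role uses Ansible's sysctl module, not a script like this:

    import subprocess

    # Values taken from the play output above; "KOLLA_UNSET" marks keys to skip.
    SYSCTL_VALUES = {
        "net.ipv6.ip_nonlocal_bind": 1,   # allow binding the VIP before it is local
        "net.ipv4.ip_nonlocal_bind": 1,
        "net.unix.max_dgram_qlen": 128,
        "net.ipv4.tcp_retries2": "KOLLA_UNSET",
    }

    def apply_sysctl(values):
        for name, value in values.items():
            if value == "KOLLA_UNSET":
                continue  # leave the kernel default in place
            # apply immediately; the Ansible sysctl module additionally persists the value
            subprocess.run(["sysctl", "-w", f"{name}={value}"], check=True)

    if __name__ == "__main__":
        apply_sysctl(SYSCTL_VALUES)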
2025-03-27 00:58:49.259994 | orchestrator |
2025-03-27 00:58:49.260008 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-03-27 00:58:49.260022 | orchestrator | Thursday 27 March 2025 00:50:48 +0000 (0:00:05.358) 0:00:12.539 ********
2025-03-27 00:58:49.260036 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-03-27 00:58:49.260056 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-03-27 00:58:49.260114 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-03-27 00:58:49.260130 | orchestrator |
2025-03-27 00:58:49.260144 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-03-27 00:58:49.260158 | orchestrator | Thursday 27 March 2025 00:50:49 +0000 (0:00:01.123) 0:00:13.662 ********
2025-03-27 00:58:49.260172 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-03-27 00:58:49.260186 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-03-27 00:58:49.260200 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-03-27 00:58:49.260214 | orchestrator |
2025-03-27 00:58:49.260228 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-03-27 00:58:49.260242 | orchestrator | Thursday 27 March 2025 00:50:51 +0000 (0:00:01.669) 0:00:15.332 ********
2025-03-27 00:58:49.260255 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-03-27 00:58:49.260269 | orchestrator | skipping: [testbed-node-0]
2025-03-27 00:58:49.260296 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-03-27 00:58:49.260311 | orchestrator | skipping: [testbed-node-1]
2025-03-27 00:58:49.260353 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-03-27 00:58:49.260420 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:58:49.260435 | orchestrator |
2025-03-27 00:58:49.260449 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-03-27 00:58:49.260463 | orchestrator | Thursday 27 March 2025 00:50:52 +0000 (0:00:00.945) 0:00:16.278 ********
2025-03-27 00:58:49.260479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.260508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.260524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.260539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.260554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.260623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.260667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 00:58:49.260717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.260735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 00:58:49.260794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.260809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 00:58:49.260824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-03-27 00:58:49.260838 | orchestrator |
2025-03-27 00:58:49.260852 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-03-27 00:58:49.260866 | orchestrator | Thursday 27 March 2025 00:50:54 +0000 (0:00:02.495) 0:00:18.774 ********
2025-03-27 00:58:49.260880 | orchestrator | changed: [testbed-node-0]
2025-03-27 00:58:49.260905 | orchestrator | changed: [testbed-node-2]
2025-03-27 00:58:49.260941 | orchestrator | changed: [testbed-node-1]
2025-03-27 00:58:49.260966 | orchestrator |
2025-03-27 00:58:49.261000 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-03-27 00:58:49.261028 | orchestrator | Thursday 27 March 2025 00:50:57 +0000 (0:00:02.395) 0:00:21.170 ********
2025-03-27 00:58:49.261055 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-03-27 00:58:49.261083 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-03-27 00:58:49.261112 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-03-27 00:58:49.261138 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-03-27 00:58:49.261165 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-03-27 00:58:49.261270 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-03-27 00:58:49.261298 | orchestrator |
2025-03-27 00:58:49.261327 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-03-27 00:58:49.261356 | orchestrator | Thursday 27 March 2025 00:51:00 +0000 (0:00:03.756) 0:00:24.926 ********
2025-03-27 00:58:49.261616 | orchestrator | changed: [testbed-node-0]
2025-03-27 00:58:49.261632 | orchestrator | changed: [testbed-node-1]
2025-03-27 00:58:49.261645 | orchestrator | changed: [testbed-node-2]
2025-03-27 00:58:49.261683 | orchestrator |
2025-03-27 00:58:49.261698 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-03-27 00:58:49.261711 | orchestrator | Thursday 27 March 2025 00:51:03 +0000 (0:00:02.345) 0:00:27.272 ********
2025-03-27 00:58:49.261723 | orchestrator | ok: [testbed-node-1]
2025-03-27 00:58:49.261736 | orchestrator | ok: [testbed-node-0]
2025-03-27 00:58:49.261748 | orchestrator | ok: [testbed-node-2]
2025-03-27 00:58:49.261760 | orchestrator |
2025-03-27 00:58:49.261773 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-03-27 00:58:49.261785 | orchestrator | Thursday 27 March 2025 00:51:08 +0000 (0:00:05.572) 0:00:32.844 ********
2025-03-27 00:58:49.261798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-27 00:58:49.261813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-27 00:58:49.261826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-27 00:58:49.261839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-27 00:58:49.261876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-27 00:58:49.261890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-03-27 00:58:49.261903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.261916 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.261929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-27 00:58:49.261942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-27 00:58:49.261954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.261973 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.261986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-27 00:58:49.262005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.262064 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.262108 | orchestrator | 2025-03-27 00:58:49.262148 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-03-27 00:58:49.262163 | orchestrator | Thursday 27 March 2025 00:51:12 +0000 (0:00:04.060) 0:00:36.904 ******** 2025-03-27 00:58:49.262217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.262231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.262245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.262265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.262286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.262300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.262313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-27 00:58:49.262326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-27 00:58:49.262339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.262352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-27 00:58:49.262372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.262394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.266164 | orchestrator | 2025-03-27 00:58:49.266213 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-03-27 00:58:49.266390 | orchestrator | Thursday 27 March 2025 00:51:20 +0000 (0:00:07.152) 0:00:44.057 ******** 2025-03-27 00:58:49.266432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.266446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-27 
00:58:49.266456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.266477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.266488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.266508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.266523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 00:58:49.266534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.266545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 00:58:49.266556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.266577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 00:58:49.266587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.266598 | orchestrator | 2025-03-27 00:58:49.266608 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-03-27 00:58:49.266618 | orchestrator | Thursday 27 March 2025 00:51:24 +0000 (0:00:04.119) 0:00:48.176 ******** 2025-03-27 00:58:49.266633 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-27 00:58:49.266645 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-27 00:58:49.266655 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-27 00:58:49.266665 | orchestrator | 2025-03-27 00:58:49.266675 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-03-27 00:58:49.266685 | orchestrator | Thursday 27 March 2025 00:51:27 +0000 (0:00:03.232) 0:00:51.409 ******** 2025-03-27 00:58:49.266695 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-27 00:58:49.266705 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-27 00:58:49.266715 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-27 00:58:49.266725 | orchestrator | 2025-03-27 00:58:49.266735 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-03-27 00:58:49.266745 | orchestrator | Thursday 27 March 2025 00:51:33 +0000 (0:00:05.672) 0:00:57.082 ******** 2025-03-27 00:58:49.266755 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.266765 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.266775 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.266785 | orchestrator | 2025-03-27 00:58:49.266794 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-03-27 00:58:49.266804 | orchestrator | Thursday 27 March 2025 00:51:34 +0000 (0:00:01.472) 0:00:58.554 ******** 2025-03-27 00:58:49.266814 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-27 00:58:49.266883 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-27 00:58:49.266897 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-27 00:58:49.266916 | orchestrator | 2025-03-27 00:58:49.266926 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-03-27 00:58:49.266936 | orchestrator | Thursday 27 March 2025 00:51:37 +0000 (0:00:03.265) 0:01:01.819 ******** 2025-03-27 00:58:49.266946 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-27 00:58:49.266957 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-27 00:58:49.266967 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-27 00:58:49.266977 | orchestrator | 2025-03-27 00:58:49.266987 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-03-27 00:58:49.266996 | orchestrator | Thursday 27 March 2025 00:51:41 +0000 (0:00:03.762) 0:01:05.582 ******** 2025-03-27 00:58:49.267007 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-03-27 00:58:49.267042 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-03-27 00:58:49.267054 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-03-27 00:58:49.267122 | 
orchestrator | 2025-03-27 00:58:49.267133 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-03-27 00:58:49.267143 | orchestrator | Thursday 27 March 2025 00:51:43 +0000 (0:00:02.298) 0:01:07.880 ******** 2025-03-27 00:58:49.267153 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-03-27 00:58:49.267163 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-03-27 00:58:49.267174 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-03-27 00:58:49.267183 | orchestrator | 2025-03-27 00:58:49.267193 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-03-27 00:58:49.267203 | orchestrator | Thursday 27 March 2025 00:51:46 +0000 (0:00:02.314) 0:01:10.194 ******** 2025-03-27 00:58:49.267213 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.267223 | orchestrator | 2025-03-27 00:58:49.267233 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-03-27 00:58:49.267243 | orchestrator | Thursday 27 March 2025 00:51:46 +0000 (0:00:00.829) 0:01:11.024 ******** 2025-03-27 00:58:49.267258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.267278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.267289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.267306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.267317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.267327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.267341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 00:58:49.267390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 00:58:49.267437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 
00:58:49.267494 | orchestrator | 2025-03-27 00:58:49.267506 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-03-27 00:58:49.267517 | orchestrator | Thursday 27 March 2025 00:51:50 +0000 (0:00:03.620) 0:01:14.644 ******** 2025-03-27 00:58:49.267528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-27 00:58:49.267538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-27 00:58:49.267549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-27 00:58:49.267559 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.267574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-27 00:58:49.267585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-27 00:58:49.267602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-27 00:58:49.267622 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.267633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-27 00:58:49.267643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-27 00:58:49.267653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-27 00:58:49.267664 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.267674 | orchestrator | 2025-03-27 00:58:49.267684 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-03-27 00:58:49.267694 | orchestrator | Thursday 27 March 2025 00:51:51 +0000 (0:00:00.746) 0:01:15.390 ******** 2025-03-27 00:58:49.267704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-27 00:58:49.267766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-27 00:58:49.267783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-27 00:58:49.267800 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.267810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-27 00:58:49.267821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-27 00:58:49.267832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-27 00:58:49.267842 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.267852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-27 00:58:49.267867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-27 00:58:49.267878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-27 00:58:49.267893 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.267903 | orchestrator | 2025-03-27 00:58:49.267913 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-03-27 00:58:49.267928 | orchestrator | Thursday 27 March 2025 00:51:52 +0000 (0:00:01.037) 0:01:16.428 ******** 2025-03-27 00:58:49.267938 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-27 00:58:49.267949 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-27 00:58:49.267959 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-27 00:58:49.267969 | orchestrator | 2025-03-27 00:58:49.267979 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-03-27 00:58:49.267989 | orchestrator | Thursday 27 March 2025 00:51:54 +0000 (0:00:02.180) 0:01:18.608 ******** 2025-03-27 00:58:49.267999 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-03-27 00:58:49.268009 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-03-27 00:58:49.268027 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-03-27 00:58:49.268058 | orchestrator | 2025-03-27 00:58:49.268069 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-03-27 00:58:49.268079 | 
orchestrator | Thursday 27 March 2025 00:51:58 +0000 (0:00:03.764) 0:01:22.373 ******** 2025-03-27 00:58:49.268089 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-27 00:58:49.268099 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-27 00:58:49.268109 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-27 00:58:49.268120 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-27 00:58:49.268135 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.268152 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-27 00:58:49.268164 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.268174 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-27 00:58:49.268184 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.268194 | orchestrator | 2025-03-27 00:58:49.268203 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-03-27 00:58:49.268213 | orchestrator | Thursday 27 March 2025 00:52:00 +0000 (0:00:02.022) 0:01:24.396 ******** 2025-03-27 00:58:49.268224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.268234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.268251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-27 00:58:49.268268 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.268283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.268294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-27 00:58:49.268305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 00:58:49.268315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.268331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 00:58:49.268347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.268358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-27 00:58:49.268368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de', '__omit_place_holder__8eec041787bdae09076d4d33d200a620c2d130de'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-27 00:58:49.268379 | orchestrator | 2025-03-27 00:58:49.268389 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-03-27 00:58:49.268399 | orchestrator | Thursday 27 March 2025 00:52:03 +0000 (0:00:03.521) 0:01:27.917 ******** 2025-03-27 00:58:49.268424 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.268434 | orchestrator | 2025-03-27 00:58:49.268445 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-03-27 00:58:49.268455 | orchestrator | Thursday 27 March 2025 00:52:05 +0000 (0:00:01.283) 0:01:29.201 ******** 2025-03-27 00:58:49.268465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-27 00:58:49.268481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-27 00:58:49.268493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.268593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.268609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.268621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.268631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.268648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.268666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-27 00:58:49.268685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.268696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
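The aodh items in the "Copying over aodh haproxy config" task above show the shape of the service definitions the haproxy-config role loops over: a service may carry a 'haproxy' mapping whose entries declare an internal and an external frontend ('enabled', 'mode', 'external', 'external_fqdn', 'port', 'listen_port'). The following is a minimal illustrative sketch, not kolla-ansible's actual code, showing how such a mapping can be filtered into the frontends that would end up in haproxy.cfg; the 'aodh-api' data is copied from the testbed-node-0 item logged above, and the helper name frontends() is a hypothetical example.

    # Illustrative sketch only: mirrors the structure of the service definitions
    # printed by the haproxy-config tasks above; it is not kolla-ansible code.
    aodh_api = {
        'container_name': 'aodh_api',
        'enabled': True,
        'haproxy': {
            'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False,
                         'port': '8042', 'listen_port': '8042'},
            'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True,
                                  'external_fqdn': 'api.testbed.osism.xyz',
                                  'port': '8042', 'listen_port': '8042'},
        },
    }

    def frontends(service):
        """Yield (name, listen_port, external) for every enabled haproxy entry."""
        for name, cfg in service.get('haproxy', {}).items():
            if cfg.get('enabled') == 'yes':
                yield name, cfg['listen_port'], cfg['external']

    # Services without a 'haproxy' key (aodh-evaluator, aodh-listener,
    # aodh-notifier in the loop above) yield nothing, which is consistent
    # with those items being reported as skipped in the task output.
    print(list(frontends(aodh_api)))
    # [('aodh_api', '8042', False), ('aodh_api_external', '8042', True)]
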
2025-03-27 00:58:49.268706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.268717 | orchestrator | 2025-03-27 00:58:49.268727 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-03-27 00:58:49.268737 | orchestrator | Thursday 27 March 2025 00:52:11 +0000 (0:00:06.560) 0:01:35.761 ******** 2025-03-27 00:58:49.268747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-27 00:58:49.268762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.268781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.268797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.268808 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.268818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-27 00:58:49.268829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.268845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.268855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.268866 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.268887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-27 00:58:49.268904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.268915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.268925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.268935 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.268950 | orchestrator | 2025-03-27 00:58:49.268961 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-03-27 00:58:49.268971 | orchestrator | Thursday 27 March 2025 00:52:12 +0000 (0:00:00.974) 0:01:36.735 ******** 2025-03-27 00:58:49.268981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-27 00:58:49.268993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-27 00:58:49.269003 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.269014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-27 00:58:49.269024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042'}})  2025-03-27 00:58:49.269035 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.269046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-27 00:58:49.269056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-27 00:58:49.269066 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.269076 | orchestrator | 2025-03-27 00:58:49.269086 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-03-27 00:58:49.269096 | orchestrator | Thursday 27 March 2025 00:52:14 +0000 (0:00:01.745) 0:01:38.481 ******** 2025-03-27 00:58:49.269106 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.269116 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.269126 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.269135 | orchestrator | 2025-03-27 00:58:49.269145 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-03-27 00:58:49.269155 | orchestrator | Thursday 27 March 2025 00:52:15 +0000 (0:00:01.550) 0:01:40.031 ******** 2025-03-27 00:58:49.269165 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.269175 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.269185 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.269195 | orchestrator | 2025-03-27 00:58:49.269205 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-03-27 00:58:49.269215 | orchestrator | Thursday 27 March 2025 00:52:18 +0000 (0:00:02.563) 0:01:42.594 ******** 2025-03-27 00:58:49.269225 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.269235 | orchestrator | 2025-03-27 00:58:49.269245 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-03-27 00:58:49.269255 | orchestrator | Thursday 27 March 2025 00:52:19 +0000 (0:00:00.901) 0:01:43.495 ******** 2025-03-27 00:58:49.269278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.269298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.269331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.269381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269403 | orchestrator | 2025-03-27 00:58:49.269436 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-03-27 00:58:49.269446 | orchestrator | Thursday 27 March 2025 00:52:24 +0000 (0:00:05.333) 0:01:48.829 ******** 2025-03-27 00:58:49.269457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.269477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269505 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.269523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.269534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269555 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.269570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.269586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.269614 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.269624 | orchestrator | 2025-03-27 00:58:49.269634 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-03-27 00:58:49.269644 | orchestrator | Thursday 27 March 2025 00:52:25 +0000 (0:00:01.058) 0:01:49.887 ******** 2025-03-27 00:58:49.269654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-27 00:58:49.269664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-27 00:58:49.269675 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.269685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-27 00:58:49.269699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-27 00:58:49.269710 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.269721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-27 00:58:49.269731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-27 00:58:49.269741 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.269751 | orchestrator | 2025-03-27 00:58:49.269761 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-03-27 00:58:49.269771 | orchestrator | Thursday 27 March 2025 00:52:27 +0000 (0:00:02.060) 0:01:51.948 ******** 2025-03-27 00:58:49.269780 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.269796 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.269806 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.269816 | orchestrator | 2025-03-27 00:58:49.269826 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-03-27 00:58:49.269836 | orchestrator | Thursday 27 March 2025 00:52:29 +0000 (0:00:01.756) 0:01:53.704 ******** 2025-03-27 00:58:49.269845 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.269855 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.269865 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.269875 | orchestrator | 2025-03-27 00:58:49.269885 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-03-27 00:58:49.269895 | orchestrator | Thursday 27 March 2025 00:52:32 +0000 (0:00:02.406) 0:01:56.110 ******** 2025-03-27 00:58:49.269905 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.269915 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.269925 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.269935 | orchestrator | 2025-03-27 00:58:49.269949 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-03-27 00:58:49.269959 | orchestrator | Thursday 27 March 2025 00:52:32 +0000 (0:00:00.339) 0:01:56.450 ******** 2025-03-27 00:58:49.269969 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.269979 | orchestrator | 
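Note: each item iterated by the haproxy-config tasks above is a kolla-ansible service definition. The sketch below reconstructs that shape from the aodh-api item logged earlier; the field names and values are copied from that log entry, while the enclosing variable name (aodh_services) and the YAML layout are assumptions about how such a dict is typically declared, not something taken from this job output.

    aodh_services:                     # variable name assumed for illustration
      aodh-api:
        container_name: aodh_api
        group: aodh-api
        enabled: true
        image: registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206
        volumes:
          - "/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro"
          - "kolla_logs:/var/log/kolla/"
        dimensions: {}
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8042"]
          timeout: "30"
        haproxy:                       # presence of this key is what makes the role render frontend/backend config
          aodh_api:
            enabled: "yes"
            mode: "http"
            external: false
            port: "8042"
            listen_port: "8042"
          aodh_api_external:
            enabled: "yes"
            mode: "http"
            external: true
            external_fqdn: "api.testbed.osism.xyz"
            port: "8042"
            listen_port: "8042"

The skipped loop items in the log (evaluator, listener, notifier, worker, etc.) follow the same shape but carry no haproxy key, which is why only the API containers produce haproxy configuration changes.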
2025-03-27 00:58:49.269989 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-03-27 00:58:49.269999 | orchestrator | Thursday 27 March 2025 00:52:33 +0000 (0:00:01.022) 0:01:57.473 ******** 2025-03-27 00:58:49.270009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-27 00:58:49.270053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-27 00:58:49.270064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-27 00:58:49.270080 | orchestrator | 2025-03-27 00:58:49.270091 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-03-27 00:58:49.270100 | orchestrator | Thursday 27 March 2025 00:52:36 +0000 (0:00:03.291) 0:02:00.764 ******** 2025-03-27 00:58:49.270118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 
'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-27 00:58:49.270129 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.270145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-27 00:58:49.270156 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.270166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-27 00:58:49.270177 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.270187 | orchestrator | 2025-03-27 00:58:49.270197 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-03-27 00:58:49.270207 | orchestrator | Thursday 27 March 2025 00:52:38 +0000 (0:00:01.937) 0:02:02.701 ******** 2025-03-27 00:58:49.270217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-27 00:58:49.270227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-27 00:58:49.270244 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.270254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-27 00:58:49.270265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-27 00:58:49.270275 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.270285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-27 00:58:49.270300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-27 00:58:49.270311 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.270321 | orchestrator | 2025-03-27 00:58:49.270331 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-03-27 00:58:49.270341 | orchestrator | Thursday 27 March 2025 00:52:40 +0000 (0:00:02.302) 0:02:05.004 ******** 2025-03-27 00:58:49.270350 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.270360 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.270370 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.270380 | orchestrator | 2025-03-27 00:58:49.270390 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-03-27 00:58:49.270400 | orchestrator | Thursday 27 March 2025 00:52:41 +0000 (0:00:00.855) 0:02:05.860 ******** 2025-03-27 00:58:49.270423 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.270434 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.270444 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.270454 | orchestrator | 2025-03-27 00:58:49.270464 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-03-27 00:58:49.270474 | orchestrator | Thursday 27 March 2025 00:52:43 +0000 (0:00:01.422) 0:02:07.282 ******** 2025-03-27 00:58:49.270484 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.270494 | orchestrator | 2025-03-27 00:58:49.270504 | 
orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-03-27 00:58:49.270514 | orchestrator | Thursday 27 March 2025 00:52:44 +0000 (0:00:01.019) 0:02:08.302 ******** 2025-03-27 00:58:49.270524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.270540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.270598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.270617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270705 | orchestrator | 2025-03-27 00:58:49.270715 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-03-27 00:58:49.270729 | orchestrator | Thursday 27 March 2025 00:52:49 +0000 (0:00:05.109) 0:02:13.411 ******** 2025-03-27 00:58:49.270740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.270750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270792 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.270803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.270819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270857 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.270867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.270889 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.270921 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.270931 | orchestrator | 2025-03-27 00:58:49.270941 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-03-27 00:58:49.270951 | orchestrator | Thursday 27 March 2025 00:52:50 +0000 (0:00:01.167) 0:02:14.578 ******** 2025-03-27 00:58:49.270962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-27 00:58:49.270976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-27 00:58:49.270987 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.270997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-27 00:58:49.271007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-27 00:58:49.271023 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.271034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-27 00:58:49.271044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-27 00:58:49.271054 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.271064 | orchestrator | 2025-03-27 00:58:49.271074 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-03-27 00:58:49.271084 | orchestrator | Thursday 27 March 2025 00:52:52 +0000 (0:00:02.083) 0:02:16.662 ******** 2025-03-27 00:58:49.271094 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.271109 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.271127 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.271144 | orchestrator | 2025-03-27 00:58:49.271162 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-03-27 00:58:49.271180 | orchestrator | Thursday 27 March 2025 00:52:54 +0000 (0:00:01.860) 0:02:18.523 ******** 2025-03-27 00:58:49.271200 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.271220 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.271240 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.271261 | orchestrator | 2025-03-27 00:58:49.271281 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-03-27 00:58:49.271301 | orchestrator | Thursday 27 March 2025 00:52:56 +0000 (0:00:02.162) 0:02:20.686 ******** 2025-03-27 00:58:49.271321 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.271340 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.271366 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.271387 | orchestrator | 2025-03-27 00:58:49.271422 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-03-27 00:58:49.271443 | orchestrator | Thursday 27 March 2025 00:52:56 +0000 (0:00:00.274) 0:02:20.960 ******** 2025-03-27 00:58:49.271461 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.271478 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.271497 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.271513 | orchestrator | 2025-03-27 00:58:49.271530 | orchestrator | TASK [include_role : designate] ************************************************ 2025-03-27 00:58:49.271547 | orchestrator | Thursday 27 March 2025 00:52:57 +0000 (0:00:00.390) 0:02:21.351 ******** 2025-03-27 00:58:49.271564 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.271581 | orchestrator | 2025-03-27 00:58:49.271599 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-03-27 00:58:49.271609 | orchestrator | Thursday 27 March 2025 00:52:58 +0000 (0:00:00.999) 0:02:22.350 ******** 2025-03-27 00:58:49.271621 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 00:58:49.271658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 00:58:49.271682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 00:58:49.271765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 00:58:49.271776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 00:58:49.271786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 00:58:49.271807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.271938 | orchestrator | 2025-03-27 00:58:49.271953 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-03-27 00:58:49.271963 | orchestrator | Thursday 27 March 2025 00:53:02 +0000 (0:00:04.559) 0:02:26.910 ******** 2025-03-27 00:58:49.271974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 00:58:49.271984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 00:58:49.272001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272063 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.272074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 00:58:49.272090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 00:58:49.272101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272162 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.272205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 00:58:49.272218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 00:58:49.272234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.272299 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.272309 | orchestrator | 2025-03-27 00:58:49.272319 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-03-27 00:58:49.272329 | orchestrator | Thursday 27 March 2025 00:53:04 +0000 (0:00:01.189) 0:02:28.099 ******** 2025-03-27 00:58:49.272339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-27 00:58:49.272355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-27 00:58:49.272366 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.272376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-27 00:58:49.272386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-27 00:58:49.272396 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.272456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-27 00:58:49.272468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-27 00:58:49.272478 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.272488 | orchestrator | 2025-03-27 00:58:49.272498 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-03-27 00:58:49.272508 | orchestrator | Thursday 27 March 2025 00:53:05 +0000 (0:00:01.604) 0:02:29.703 ******** 2025-03-27 00:58:49.272518 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.272528 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.272538 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.272548 | orchestrator | 2025-03-27 00:58:49.272558 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-03-27 00:58:49.272568 | orchestrator | Thursday 27 March 2025 00:53:07 +0000 (0:00:01.417) 0:02:31.121 ******** 
2025-03-27 00:58:49.272578 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.272588 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.272598 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.272608 | orchestrator | 2025-03-27 00:58:49.272618 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-03-27 00:58:49.272628 | orchestrator | Thursday 27 March 2025 00:53:09 +0000 (0:00:02.203) 0:02:33.324 ******** 2025-03-27 00:58:49.272637 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.272647 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.272656 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.272665 | orchestrator | 2025-03-27 00:58:49.272674 | orchestrator | TASK [include_role : glance] *************************************************** 2025-03-27 00:58:49.272686 | orchestrator | Thursday 27 March 2025 00:53:09 +0000 (0:00:00.579) 0:02:33.903 ******** 2025-03-27 00:58:49.272695 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.272703 | orchestrator | 2025-03-27 00:58:49.272712 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-03-27 00:58:49.272720 | orchestrator | Thursday 27 March 2025 00:53:11 +0000 (0:00:01.219) 0:02:35.123 ******** 2025-03-27 00:58:49.272729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 00:58:49.272751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 00:58:49.272767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 00:58:49.272790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 00:58:49.272806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 00:58:49.272826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 00:58:49.272835 | orchestrator | 2025-03-27 00:58:49.272844 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-03-27 00:58:49.272853 | orchestrator | Thursday 27 March 2025 00:53:17 +0000 (0:00:06.571) 0:02:41.694 ******** 2025-03-27 00:58:49.272867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-27 00:58:49.272886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 00:58:49.272896 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.272910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-27 00:58:49.272926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 00:58:49.272940 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.272949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-27 00:58:49.272969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 00:58:49.272988 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.272997 | orchestrator | 2025-03-27 00:58:49.273006 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-03-27 00:58:49.273018 | orchestrator | Thursday 27 March 2025 00:53:23 +0000 (0:00:05.451) 0:02:47.146 ******** 2025-03-27 00:58:49.273027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-27 00:58:49.273036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-27 00:58:49.273045 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.273054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-27 00:58:49.273067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-27 00:58:49.273076 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.273085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-27 00:58:49.273098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-27 00:58:49.273107 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.273116 | orchestrator | 2025-03-27 00:58:49.273124 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-03-27 00:58:49.273133 | orchestrator | Thursday 27 March 2025 00:53:29 +0000 (0:00:06.388) 0:02:53.534 ******** 2025-03-27 00:58:49.273141 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.273150 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.273158 | orchestrator | changed: [testbed-node-2] 
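The glance items above show the shape of the service definition that the haproxy-config role consumes: each 'haproxy' sub-dict carries mode, port, listen_port or external_fqdn, optional frontend_http_extra and backend_http_extra lines, and an explicit custom_member_list of backend servers. The following standalone Python sketch is only an illustration of how such a dict could be turned into an haproxy listen section; it is not kolla-ansible's actual template, and render_listen_block() as well as the bind address used in the example are assumptions made here for clarity.

    # Illustrative sketch only (not kolla-ansible code): render an haproxy
    # "listen" section from a service entry like the glance_api one logged above.
    glance_api = {
        "enabled": True,
        "mode": "http",
        "external": False,
        "port": "9292",
        "frontend_http_extra": ["timeout client 6h"],
        "backend_http_extra": ["timeout server 6h"],
        "custom_member_list": [
            "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
            "",  # conditional entries can render as empty strings, as seen in the log
        ],
    }

    def render_listen_block(name: str, svc: dict, bind_address: str) -> str:
        """Build a minimal haproxy listen section from a kolla-style service dict."""
        lines = [
            f"listen {name}",
            f"    mode {svc['mode']}",
            f"    bind {bind_address}:{svc.get('listen_port', svc['port'])}",
        ]
        lines += [f"    {opt}" for opt in svc.get("frontend_http_extra", [])]
        lines += [f"    {opt}" for opt in svc.get("backend_http_extra", [])]
        # Skip the empty conditional members so only real server lines are emitted.
        lines += [f"    {member}" for member in svc["custom_member_list"] if member]
        return "\n".join(lines)

    if __name__ == "__main__":
        # The bind address is a placeholder for the internal API VIP and is an
        # assumption for this example, not a value confirmed by the log.
        print(render_listen_block("glance_api", glance_api, "192.168.16.9"))

Running the sketch prints a listen block with one server line per controller, matching the member lists shown in the glance haproxy config items above.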
2025-03-27 00:58:49.273166 | orchestrator | 2025-03-27 00:58:49.273175 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-03-27 00:58:49.273183 | orchestrator | Thursday 27 March 2025 00:53:30 +0000 (0:00:01.464) 0:02:54.999 ******** 2025-03-27 00:58:49.273192 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.273200 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.273208 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.273217 | orchestrator | 2025-03-27 00:58:49.273225 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-03-27 00:58:49.273234 | orchestrator | Thursday 27 March 2025 00:53:33 +0000 (0:00:02.360) 0:02:57.359 ******** 2025-03-27 00:58:49.273242 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.273250 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.273259 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.273267 | orchestrator | 2025-03-27 00:58:49.273276 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-03-27 00:58:49.273284 | orchestrator | Thursday 27 March 2025 00:53:33 +0000 (0:00:00.518) 0:02:57.877 ******** 2025-03-27 00:58:49.273293 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.273301 | orchestrator | 2025-03-27 00:58:49.273310 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-03-27 00:58:49.273318 | orchestrator | Thursday 27 March 2025 00:53:35 +0000 (0:00:01.262) 0:02:59.140 ******** 2025-03-27 00:58:49.273327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 00:58:49.273336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 00:58:49.273353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 00:58:49.273362 | orchestrator | 2025-03-27 00:58:49.273371 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-03-27 00:58:49.273379 | orchestrator | Thursday 27 March 2025 00:53:38 +0000 (0:00:03.832) 0:03:02.972 ******** 2025-03-27 00:58:49.273388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-27 00:58:49.273397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-27 00:58:49.273418 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.273427 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.273436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-27 00:58:49.273445 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.273453 | orchestrator | 2025-03-27 00:58:49.273462 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-03-27 00:58:49.273470 | orchestrator | Thursday 27 March 2025 00:53:39 +0000 (0:00:00.425) 0:03:03.398 ******** 2025-03-27 00:58:49.273479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-27 00:58:49.273491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-27 00:58:49.273507 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.273515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-27 00:58:49.273524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-27 00:58:49.273532 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.273541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-27 00:58:49.273553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-27 00:58:49.273689 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.273703 | orchestrator | 2025-03-27 00:58:49.273712 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-03-27 00:58:49.273721 | orchestrator | Thursday 27 March 2025 00:53:40 +0000 (0:00:01.098) 0:03:04.496 ******** 2025-03-27 00:58:49.273729 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.273738 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.273746 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.273754 | orchestrator | 2025-03-27 00:58:49.273763 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-03-27 00:58:49.273771 | orchestrator | Thursday 27 March 2025 00:53:41 +0000 (0:00:01.254) 0:03:05.750 ******** 2025-03-27 00:58:49.273779 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.273788 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.273796 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.273804 | orchestrator | 2025-03-27 00:58:49.273813 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-03-27 00:58:49.273821 | orchestrator | Thursday 27 March 2025 00:53:44 +0000 (0:00:02.320) 0:03:08.071 ******** 2025-03-27 00:58:49.273829 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.273838 | orchestrator | 2025-03-27 00:58:49.273846 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-03-27 00:58:49.273854 | orchestrator | Thursday 27 March 2025 00:53:45 +0000 (0:00:01.302) 0:03:09.373 ******** 2025-03-27 00:58:49.273873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.273883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.273898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.273914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.273924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.273933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.273942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.273962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.273972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.273981 | orchestrator | 2025-03-27 00:58:49.273993 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-03-27 00:58:49.274002 | orchestrator | Thursday 27 March 2025 00:53:53 
+0000 (0:00:07.876) 0:03:17.250 ******** 2025-03-27 00:58:49.274011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.274049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.274060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.274075 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.274091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.274104 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.274113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.274122 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.274131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.274150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.274221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': 
{'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.274231 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.274240 | orchestrator | 2025-03-27 00:58:49.274248 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-03-27 00:58:49.274257 | orchestrator | Thursday 27 March 2025 00:53:54 +0000 (0:00:01.111) 0:03:18.361 ******** 2025-03-27 00:58:49.274265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274307 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.274320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274360 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.274373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274398 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-27 00:58:49.274433 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.274442 | orchestrator | 2025-03-27 00:58:49.274451 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-03-27 00:58:49.274461 | orchestrator | Thursday 27 March 2025 00:53:55 +0000 (0:00:01.589) 0:03:19.951 ******** 2025-03-27 00:58:49.274471 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.274480 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.274490 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.274499 | orchestrator | 2025-03-27 00:58:49.274509 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-03-27 00:58:49.274518 | orchestrator | Thursday 27 March 2025 00:53:57 +0000 (0:00:01.679) 0:03:21.630 ******** 2025-03-27 00:58:49.274528 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.274537 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.274547 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.274556 | orchestrator | 2025-03-27 00:58:49.274568 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-03-27 00:58:49.274578 | orchestrator | Thursday 27 March 2025 00:54:00 +0000 (0:00:02.454) 0:03:24.085 ******** 2025-03-27 00:58:49.274588 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.274597 | orchestrator | 2025-03-27 00:58:49.274606 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-03-27 00:58:49.274616 | orchestrator | Thursday 27 March 2025 00:54:01 +0000 (0:00:01.204) 0:03:25.289 ******** 2025-03-27 00:58:49.274632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 00:58:49.274648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 00:58:49.274663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 
'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 00:58:49.274677 | orchestrator | 2025-03-27 00:58:49.274686 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-03-27 00:58:49.274694 | orchestrator | Thursday 27 March 2025 00:54:07 +0000 (0:00:06.668) 0:03:31.958 ******** 2025-03-27 00:58:49.274703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-27 00:58:49.274712 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.274726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-27 00:58:49.274740 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.274749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-27 00:58:49.274758 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.274766 | orchestrator | 2025-03-27 00:58:49.274778 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-03-27 00:58:49.274786 | orchestrator | Thursday 27 March 2025 00:54:09 +0000 (0:00:01.386) 0:03:33.344 ******** 2025-03-27 00:58:49.274801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-27 00:58:49.274811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-27 00:58:49.274822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-27 00:58:49.274832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2025-03-27 00:58:49.274841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-27 00:58:49.274850 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.274863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-27 00:58:49.274872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-27 00:58:49.274881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-27 00:58:49.274893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-27 00:58:49.274901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-27 00:58:49.274910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-27 00:58:49.274918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-27 00:58:49.274935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-27 00:58:49.274944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-27 00:58:49.274952 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.274961 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-27 00:58:49.274970 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.274978 | orchestrator | 2025-03-27 00:58:49.274987 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-03-27 00:58:49.274995 | orchestrator | Thursday 27 March 2025 00:54:10 +0000 (0:00:01.339) 0:03:34.683 ******** 2025-03-27 00:58:49.275003 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.275012 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.275020 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.275029 | orchestrator | 2025-03-27 00:58:49.275037 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-03-27 00:58:49.275045 | orchestrator | Thursday 27 March 2025 00:54:12 +0000 (0:00:01.554) 0:03:36.238 ******** 2025-03-27 00:58:49.275053 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.275062 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.275070 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.275078 | orchestrator | 2025-03-27 00:58:49.275087 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-03-27 00:58:49.275095 | orchestrator | Thursday 27 March 2025 00:54:14 +0000 (0:00:02.646) 0:03:38.884 ******** 2025-03-27 00:58:49.275103 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.275112 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.275120 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.275129 | orchestrator | 2025-03-27 00:58:49.275137 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-03-27 00:58:49.275145 | orchestrator | Thursday 27 March 2025 00:54:15 +0000 (0:00:00.504) 0:03:39.389 ******** 2025-03-27 00:58:49.275154 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.275162 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.275170 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.275178 | orchestrator | 2025-03-27 00:58:49.275187 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-03-27 00:58:49.275195 | orchestrator | Thursday 27 March 2025 00:54:15 +0000 (0:00:00.318) 0:03:39.707 ******** 2025-03-27 00:58:49.275204 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.275212 | orchestrator | 2025-03-27 00:58:49.275220 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-03-27 00:58:49.275229 | orchestrator | Thursday 27 March 2025 00:54:17 +0000 (0:00:01.355) 0:03:41.063 ******** 2025-03-27 00:58:49.275238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 00:58:49.275251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 00:58:49.275264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 00:58:49.275274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 00:58:49.275283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 00:58:49.275292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 00:58:49.275305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 00:58:49.275318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 00:58:49.275327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 00:58:49.275336 | orchestrator | 2025-03-27 00:58:49.275345 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-03-27 00:58:49.275353 | orchestrator | Thursday 27 March 2025 00:54:21 +0000 (0:00:04.851) 0:03:45.914 ******** 2025-03-27 00:58:49.275362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-27 00:58:49.275371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 00:58:49.275385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 00:58:49.275394 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.275445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-27 00:58:49.275457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 00:58:49.275466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 00:58:49.275474 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.275483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-27 00:58:49.275498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 00:58:49.275507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 00:58:49.275515 | orchestrator | 
skipping: [testbed-node-2] 2025-03-27 00:58:49.275524 | orchestrator | 2025-03-27 00:58:49.275532 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-03-27 00:58:49.275541 | orchestrator | Thursday 27 March 2025 00:54:22 +0000 (0:00:01.028) 0:03:46.943 ******** 2025-03-27 00:58:49.275553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-27 00:58:49.275562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-27 00:58:49.275571 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.275579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-27 00:58:49.275588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-27 00:58:49.275597 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.275605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-27 00:58:49.275614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-27 00:58:49.275622 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.275631 | orchestrator | 2025-03-27 00:58:49.275639 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-03-27 00:58:49.275652 | orchestrator | Thursday 27 March 2025 00:54:23 +0000 (0:00:01.108) 0:03:48.051 ******** 2025-03-27 00:58:49.275660 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.275669 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.275677 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.275686 | orchestrator | 2025-03-27 00:58:49.275694 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-03-27 00:58:49.275702 | orchestrator | Thursday 27 March 2025 00:54:25 +0000 (0:00:01.465) 0:03:49.517 ******** 2025-03-27 00:58:49.275711 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.275719 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.275728 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.275736 | orchestrator | 2025-03-27 00:58:49.275744 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-03-27 00:58:49.275753 | orchestrator | Thursday 
27 March 2025 00:54:27 +0000 (0:00:02.463) 0:03:51.981 ******** 2025-03-27 00:58:49.275761 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.275769 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.275778 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.275786 | orchestrator | 2025-03-27 00:58:49.275800 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-03-27 00:58:49.275815 | orchestrator | Thursday 27 March 2025 00:54:28 +0000 (0:00:00.334) 0:03:52.316 ******** 2025-03-27 00:58:49.275828 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.275840 | orchestrator | 2025-03-27 00:58:49.275853 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-03-27 00:58:49.275867 | orchestrator | Thursday 27 March 2025 00:54:29 +0000 (0:00:01.423) 0:03:53.739 ******** 2025-03-27 00:58:49.275882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 00:58:49.275903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.275919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 00:58:49.275942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.275957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 00:58:49.275971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.275985 | orchestrator | 2025-03-27 00:58:49.275999 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-03-27 00:58:49.276012 | orchestrator | Thursday 27 March 2025 00:54:34 +0000 (0:00:05.258) 0:03:58.997 ******** 2025-03-27 00:58:49.276033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': 
'30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 00:58:49.276054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276069 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.276084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 00:58:49.276099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276113 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.276131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 00:58:49.276252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276273 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.276282 | orchestrator | 2025-03-27 00:58:49.276290 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-03-27 00:58:49.276298 | orchestrator | Thursday 27 March 2025 00:54:36 +0000 (0:00:01.419) 0:04:00.417 ******** 2025-03-27 00:58:49.276306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-27 00:58:49.276315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-27 00:58:49.276329 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.276337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-27 00:58:49.276345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-27 00:58:49.276353 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.276361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-27 00:58:49.276369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-27 00:58:49.276377 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.276385 | orchestrator | 2025-03-27 00:58:49.276393 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-03-27 00:58:49.276401 | orchestrator | Thursday 27 March 2025 00:54:37 +0000 (0:00:01.246) 0:04:01.664 ******** 2025-03-27 00:58:49.276424 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.276433 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.276440 | 
orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.276448 | orchestrator | 2025-03-27 00:58:49.276456 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-03-27 00:58:49.276464 | orchestrator | Thursday 27 March 2025 00:54:39 +0000 (0:00:01.566) 0:04:03.230 ******** 2025-03-27 00:58:49.276471 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.276479 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.276488 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.276500 | orchestrator | 2025-03-27 00:58:49.276508 | orchestrator | TASK [include_role : manila] *************************************************** 2025-03-27 00:58:49.276516 | orchestrator | Thursday 27 March 2025 00:54:41 +0000 (0:00:02.389) 0:04:05.620 ******** 2025-03-27 00:58:49.276524 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.276532 | orchestrator | 2025-03-27 00:58:49.276540 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-03-27 00:58:49.276547 | orchestrator | Thursday 27 March 2025 00:54:42 +0000 (0:00:01.238) 0:04:06.859 ******** 2025-03-27 00:58:49.276604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-27 00:58:49.276622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-27 00:58:49.276640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-27 00:58:49.276740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276770 | orchestrator | 2025-03-27 00:58:49.276779 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-03-27 00:58:49.276787 | orchestrator | Thursday 27 March 2025 00:54:47 +0000 (0:00:04.813) 0:04:11.672 ******** 2025-03-27 00:58:49.276834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-03-27 00:58:49.276846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-03-27 00:58:49.276871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276880 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.276923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.276994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.277003 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.277012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-03-27 00:58:49.277020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.277028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.277043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.277060 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.277080 | orchestrator | 2025-03-27 00:58:49.277089 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-03-27 00:58:49.277097 | orchestrator | Thursday 27 March 2025 00:54:48 +0000 (0:00:00.952) 0:04:12.624 ******** 2025-03-27 00:58:49.277106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-03-27 00:58:49.277157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-03-27 00:58:49.277169 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.277177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-03-27 00:58:49.277185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-03-27 00:58:49.277193 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.277201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-03-27 00:58:49.277209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-03-27 00:58:49.277217 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.277225 | orchestrator | 2025-03-27 00:58:49.277233 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-03-27 00:58:49.277241 | orchestrator | Thursday 27 March 2025 00:54:50 +0000 (0:00:01.445) 0:04:14.069 ******** 2025-03-27 00:58:49.277248 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.277256 | orchestrator | 
changed: [testbed-node-1] 2025-03-27 00:58:49.277264 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.277272 | orchestrator | 2025-03-27 00:58:49.277289 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-03-27 00:58:49.277297 | orchestrator | Thursday 27 March 2025 00:54:51 +0000 (0:00:01.626) 0:04:15.696 ******** 2025-03-27 00:58:49.277305 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.277313 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.277321 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.277329 | orchestrator | 2025-03-27 00:58:49.277337 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-03-27 00:58:49.277344 | orchestrator | Thursday 27 March 2025 00:54:54 +0000 (0:00:02.484) 0:04:18.181 ******** 2025-03-27 00:58:49.277352 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.277360 | orchestrator | 2025-03-27 00:58:49.277368 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-03-27 00:58:49.277376 | orchestrator | Thursday 27 March 2025 00:54:55 +0000 (0:00:01.577) 0:04:19.758 ******** 2025-03-27 00:58:49.277389 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 00:58:49.277398 | orchestrator | 2025-03-27 00:58:49.277448 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-03-27 00:58:49.277458 | orchestrator | Thursday 27 March 2025 00:54:59 +0000 (0:00:03.706) 0:04:23.464 ******** 2025-03-27 00:58:49.277466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-27 
00:58:49.277537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-27 00:58:49.277550 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.277559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-27 00:58:49.277576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-27 00:58:49.277586 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.277642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-27 00:58:49.277655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-27 00:58:49.277664 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.277678 | orchestrator | 2025-03-27 00:58:49.277686 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-03-27 00:58:49.277695 | orchestrator | Thursday 27 March 2025 00:55:02 +0000 (0:00:03.342) 0:04:26.807 ******** 2025-03-27 00:58:49.277704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-27 00:58:49.277754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-27 00:58:49.277765 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.277773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', 
' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-27 00:58:49.277786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-27 00:58:49.277840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-27 00:58:49.277851 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.277858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-27 00:58:49.277866 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.277873 | orchestrator | 2025-03-27 00:58:49.277881 | orchestrator | TASK [haproxy-config : 
Configuring firewall for mariadb] *********************** 2025-03-27 00:58:49.277892 | orchestrator | Thursday 27 March 2025 00:55:06 +0000 (0:00:03.565) 0:04:30.373 ******** 2025-03-27 00:58:49.277900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-27 00:58:49.277908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-27 00:58:49.277915 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.277922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-27 00:58:49.277930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-27 00:58:49.277937 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.277993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-27 00:58:49.278043 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-27 00:58:49.278054 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.278067 | orchestrator | 2025-03-27 00:58:49.278074 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-03-27 00:58:49.278081 | orchestrator | Thursday 27 March 2025 00:55:09 +0000 (0:00:03.427) 0:04:33.800 ******** 2025-03-27 00:58:49.278088 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.278105 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.278112 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.278119 | orchestrator | 2025-03-27 00:58:49.278126 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-03-27 00:58:49.278133 | orchestrator | Thursday 27 March 2025 00:55:12 +0000 (0:00:02.324) 0:04:36.125 ******** 2025-03-27 00:58:49.278140 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.278147 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.278154 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.278161 | orchestrator | 2025-03-27 00:58:49.278168 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-03-27 00:58:49.278175 | orchestrator | Thursday 27 March 2025 00:55:14 +0000 (0:00:02.082) 0:04:38.208 ******** 2025-03-27 00:58:49.278181 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.278188 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.278195 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.278202 | orchestrator | 2025-03-27 00:58:49.278209 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-03-27 00:58:49.278216 | orchestrator | Thursday 27 March 2025 00:55:14 +0000 (0:00:00.339) 0:04:38.547 ******** 2025-03-27 00:58:49.278223 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.278230 | orchestrator | 2025-03-27 00:58:49.278237 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-03-27 00:58:49.278244 | orchestrator | Thursday 27 March 2025 00:55:16 +0000 (0:00:01.536) 0:04:40.084 ******** 2025-03-27 00:58:49.278251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-03-27 00:58:49.278259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-03-27 00:58:49.278316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-03-27 00:58:49.278332 | orchestrator | 2025-03-27 00:58:49.278340 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-03-27 00:58:49.278347 | orchestrator | Thursday 27 March 2025 00:55:17 +0000 (0:00:01.772) 0:04:41.856 ******** 2025-03-27 00:58:49.278354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-27 00:58:49.278361 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.278368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-27 00:58:49.278375 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.278389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-27 00:58:49.278397 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.278418 | orchestrator | 2025-03-27 00:58:49.278425 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-03-27 00:58:49.278432 | orchestrator | Thursday 27 March 2025 00:55:18 +0000 (0:00:00.691) 0:04:42.547 ******** 2025-03-27 00:58:49.278439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-27 00:58:49.278447 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.278454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-27 00:58:49.278461 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.278472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-27 00:58:49.278480 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.278487 | orchestrator | 2025-03-27 00:58:49.278531 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-03-27 00:58:49.278542 | orchestrator | Thursday 27 March 2025 00:55:19 +0000 (0:00:00.841) 0:04:43.388 ******** 2025-03-27 00:58:49.278549 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.278556 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.278563 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.278571 | orchestrator | 2025-03-27 00:58:49.278578 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-03-27 00:58:49.278585 | orchestrator | Thursday 27 March 2025 00:55:20 +0000 (0:00:00.760) 0:04:44.149 ******** 2025-03-27 00:58:49.278592 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.278599 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.278606 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.278613 | orchestrator | 2025-03-27 00:58:49.278620 | 
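Editor's note: the haproxy-config tasks above are driven by each service's 'haproxy' dict (mode, ports, frontend/backend extras, and for MariaDB a custom_member_list with one active Galera member and two 'backup' members health-checked against clustercheck on port 4569). The following is a minimal Python sketch, not the kolla-ansible Jinja template itself, of how such an entry could map onto an HAProxy listen section; the dict values are copied from the 'mariadb' item in the log, while the render_listen helper and the VIP 192.168.16.254 are illustrative assumptions.

# Minimal sketch (assumed mapping, not kolla-ansible's actual template) of how
# the 'mariadb' haproxy entry from the log renders into an HAProxy listen block.
mariadb = {
    "enabled": True,
    "mode": "tcp",
    "port": "3306",
    "listen_port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s", "option httpchk"],
    "custom_member_list": [
        " server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
        "",  # trailing empty entry as it appears in the log
    ],
}

def render_listen(name: str, vip: str, svc: dict) -> str:
    """Render an approximate HAProxy 'listen' block for an enabled TCP service."""
    if not svc["enabled"]:
        return ""
    lines = [
        f"listen {name}",
        f"  mode {svc['mode']}",
        f"  bind {vip}:{svc['listen_port']}",
    ]
    lines += [f"  {opt}" for opt in svc["frontend_tcp_extra"] + svc["backend_tcp_extra"]]
    lines += [f" {member}" for member in svc["custom_member_list"] if member.strip()]
    return "\n".join(lines)

# 192.168.16.254 is a hypothetical internal VIP, not taken from this log excerpt.
print(render_listen("mariadb", "192.168.16.254", mariadb))

The memcached item above follows the same pattern but with 'enabled': False for its haproxy entry, which is why the "Add configuration ... single external frontend" and firewall tasks are skipped for it.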
orchestrator | TASK [include_role : mistral] ************************************************** 2025-03-27 00:58:49.278627 | orchestrator | Thursday 27 March 2025 00:55:21 +0000 (0:00:01.724) 0:04:45.873 ******** 2025-03-27 00:58:49.278634 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.278641 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.278648 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.278655 | orchestrator | 2025-03-27 00:58:49.278662 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-03-27 00:58:49.278669 | orchestrator | Thursday 27 March 2025 00:55:22 +0000 (0:00:00.335) 0:04:46.209 ******** 2025-03-27 00:58:49.278677 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.278683 | orchestrator | 2025-03-27 00:58:49.278690 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-03-27 00:58:49.278698 | orchestrator | Thursday 27 March 2025 00:55:23 +0000 (0:00:01.587) 0:04:47.797 ******** 2025-03-27 00:58:49.278705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 00:58:49.278713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.278725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.278768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.278779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 00:58:49.278787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.278802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.278810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.278818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.278867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 00:58:49.278878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.278886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.278894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.278902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.278917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 00:58:49.278930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 00:58:49.278974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.278986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 00:58:49.278994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 00:58:49.279089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.279106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.279114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 00:58:49.279188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.279208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.279216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 00:58:49.279245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 00:58:49.279290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 00:58:49.279309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 00:58:49.279384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.279436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.279449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 00:58:49.279491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.279565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.279574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 00:58:49.279602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 00:58:49.279611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279619 | orchestrator | 2025-03-27 00:58:49.279637 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-03-27 00:58:49.279645 | orchestrator | Thursday 27 March 2025 00:55:30 +0000 (0:00:06.427) 0:04:54.224 ******** 2025-03-27 00:58:49.279690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 00:58:49.279701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 00:58:49.279735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 00:58:49.279813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 00:58:49.279873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.279881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.279900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.279907 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 00:58:49.279965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.279975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.279994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.280001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 00:58:49.280008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.280031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.280103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.280117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 00:58:49.280126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2025-03-27 00:58:49.280180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 00:58:49.280193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 00:58:49.280200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280222 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.280229 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.280237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 00:58:49.280293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 
00:58:49.280339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.280391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.280419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 00:58:49.280441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.280461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 00:58:49.280472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.280558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 00:58:49.280587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-03-27 00:58:49.280600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-03-27 00:58:49.280613 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:58:49.280627 | orchestrator |
2025-03-27 00:58:49.280639 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-03-27 00:58:49.280654 | orchestrator | Thursday 27 March 2025 00:55:32 +0000 (0:00:02.104) 0:04:56.329 ********
2025-03-27 00:58:49.280667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-03-27 00:58:49.280679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-03-27 00:58:49.280691 | orchestrator | skipping: [testbed-node-0]
2025-03-27 00:58:49.280708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-03-27 00:58:49.280719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-03-27 00:58:49.280732 | orchestrator | skipping: [testbed-node-1]
2025-03-27 00:58:49.280749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-03-27 00:58:49.280769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-03-27 00:58:49.280795 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:58:49.280808 | orchestrator |
2025-03-27 00:58:49.280820 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-03-27 00:58:49.280830 | orchestrator | Thursday 27 March 2025 00:55:34 +0000 (0:00:02.312) 0:04:58.641 ********
2025-03-27 00:58:49.280838 | orchestrator | changed: [testbed-node-0]
2025-03-27 00:58:49.280845 | orchestrator | changed: [testbed-node-1]
2025-03-27 00:58:49.280876 | orchestrator | changed: [testbed-node-2]
2025-03-27 00:58:49.280884 | orchestrator |
2025-03-27 00:58:49.280891 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-03-27 00:58:49.280898 | orchestrator | Thursday 27 March 2025 00:55:36 +0000 (0:00:01.612) 0:05:00.254 ********
2025-03-27 00:58:49.280905 | orchestrator | changed: [testbed-node-0]
2025-03-27 00:58:49.280912 | orchestrator | changed: [testbed-node-1]
2025-03-27 00:58:49.280919 | orchestrator | changed: [testbed-node-2]
2025-03-27 00:58:49.280925 | orchestrator |
2025-03-27 00:58:49.280932 | orchestrator | TASK [include_role : placement] ************************************************
2025-03-27 00:58:49.280939 | orchestrator | Thursday 27 March 2025 00:55:38 +0000 (0:00:02.682) 0:05:02.936 ********
2025-03-27 00:58:49.280946 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-27 00:58:49.280953 | orchestrator |
2025-03-27 00:58:49.280960 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-03-27 00:58:49.280967 | orchestrator | Thursday 27 March 2025 00:55:40 +0000 (0:00:01.685) 0:05:04.622 ********
2025-03-27 00:58:49.280974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-03-27 00:58:49.280983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-03-27 00:58:49.280990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.281002 | orchestrator | 2025-03-27 00:58:49.281009 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-03-27 00:58:49.281016 | orchestrator | Thursday 27 March 2025 00:55:45 +0000 (0:00:04.471) 0:05:09.093 ******** 2025-03-27 00:58:49.281047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.281056 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.281063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.281071 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.281078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-03-27 00:58:49.281085 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:58:49.281092 | orchestrator |
2025-03-27 00:58:49.281099 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-03-27 00:58:49.281105 | orchestrator | Thursday 27 March 2025 00:55:45 +0000 (0:00:00.557) 0:05:09.651 ********
2025-03-27 00:58:49.281120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-03-27 00:58:49.281127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-03-27 00:58:49.281134 | orchestrator | skipping: [testbed-node-0]
2025-03-27 00:58:49.281141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-03-27 00:58:49.281148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-03-27 00:58:49.281156 | orchestrator | skipping: [testbed-node-1]
2025-03-27 00:58:49.281163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-03-27 00:58:49.281170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-03-27 00:58:49.281177 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:58:49.281184 | orchestrator |
2025-03-27 00:58:49.281192 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-03-27 00:58:49.281215 | orchestrator | Thursday 27 March 2025 00:55:46 +0000 (0:00:01.280) 0:05:10.931 ********
2025-03-27 00:58:49.281223 | orchestrator | changed: [testbed-node-0]
2025-03-27 00:58:49.281232 | orchestrator | changed: [testbed-node-1]
2025-03-27 00:58:49.281240 | orchestrator | changed: [testbed-node-2]
2025-03-27 00:58:49.281247 | orchestrator |
2025-03-27 00:58:49.281255 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-03-27 00:58:49.281263 | orchestrator | Thursday 27 March 2025 00:55:48 +0000 (0:00:01.305) 0:05:12.237 ********
2025-03-27 00:58:49.281271 | orchestrator | changed: [testbed-node-0]
2025-03-27 00:58:49.281278 | orchestrator | changed: [testbed-node-1]
2025-03-27 00:58:49.281286 | orchestrator | changed: [testbed-node-2]
2025-03-27 00:58:49.281294 | orchestrator |
2025-03-27 00:58:49.281302 | orchestrator | TASK [include_role : nova] *****************************************************
2025-03-27 00:58:49.281310 | orchestrator |
Thursday 27 March 2025 00:55:50 +0000 (0:00:02.517) 0:05:14.754 ******** 2025-03-27 00:58:49.281317 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.281325 | orchestrator | 2025-03-27 00:58:49.281333 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-03-27 00:58:49.281340 | orchestrator | Thursday 27 March 2025 00:55:52 +0000 (0:00:01.703) 0:05:16.457 ******** 2025-03-27 00:58:49.281348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.281366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.281398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.281488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281522 | orchestrator | 2025-03-27 00:58:49.281530 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-03-27 00:58:49.281538 | orchestrator | Thursday 27 March 2025 00:55:58 +0000 (0:00:06.083) 0:05:22.541 ******** 2025-03-27 00:58:49.281545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.281562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281577 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.281584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.281607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281622 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.281639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.281646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.281661 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.281668 | orchestrator | 2025-03-27 00:58:49.281675 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-03-27 00:58:49.281682 | orchestrator | Thursday 27 March 2025 00:55:59 +0000 (0:00:01.122) 0:05:23.663 ******** 2025-03-27 00:58:49.281689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281733 | 
orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.281740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281772 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.281779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-27 00:58:49.281807 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.281813 | orchestrator | 2025-03-27 00:58:49.281820 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-03-27 00:58:49.281827 | orchestrator | Thursday 27 March 2025 00:56:01 +0000 (0:00:01.437) 0:05:25.101 ******** 2025-03-27 00:58:49.281834 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.281840 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.281846 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.281852 | orchestrator | 2025-03-27 00:58:49.281858 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-03-27 00:58:49.281864 | orchestrator | Thursday 27 March 2025 00:56:02 +0000 (0:00:01.706) 0:05:26.808 ******** 2025-03-27 00:58:49.281870 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.281876 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.281882 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.281888 | orchestrator | 2025-03-27 00:58:49.281895 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-03-27 00:58:49.281901 | orchestrator | Thursday 27 March 2025 00:56:05 +0000 (0:00:02.664) 0:05:29.472 ******** 2025-03-27 00:58:49.281907 | orchestrator | included: nova-cell for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.281913 | orchestrator | 2025-03-27 00:58:49.281921 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-03-27 00:58:49.281928 | orchestrator | Thursday 27 March 2025 00:56:07 +0000 (0:00:01.870) 0:05:31.343 ******** 2025-03-27 00:58:49.281934 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-03-27 00:58:49.281941 | orchestrator | 2025-03-27 00:58:49.281947 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-03-27 00:58:49.281953 | orchestrator | Thursday 27 March 2025 00:56:08 +0000 (0:00:01.365) 0:05:32.708 ******** 2025-03-27 00:58:49.281974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-27 00:58:49.281986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-27 00:58:49.281993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-27 00:58:49.281999 | orchestrator | 2025-03-27 00:58:49.282005 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-03-27 00:58:49.282028 | orchestrator | Thursday 27 March 2025 00:56:14 +0000 (0:00:05.723) 0:05:38.432 ******** 2025-03-27 00:58:49.282035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-27 00:58:49.282042 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.282054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-27 00:58:49.282061 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.282068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-27 00:58:49.282074 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.282080 | orchestrator | 2025-03-27 00:58:49.282086 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-03-27 00:58:49.282093 | orchestrator | Thursday 27 March 2025 00:56:16 +0000 (0:00:02.229) 0:05:40.661 ******** 2025-03-27 00:58:49.282099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-27 00:58:49.282105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-27 00:58:49.282115 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.282122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-27 00:58:49.282146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-27 00:58:49.282153 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.282159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-27 00:58:49.282166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-27 00:58:49.282172 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.282178 | orchestrator | 2025-03-27 00:58:49.282184 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-27 00:58:49.282190 | orchestrator | Thursday 27 March 2025 00:56:18 +0000 (0:00:01.961) 0:05:42.623 ******** 2025-03-27 00:58:49.282197 | orchestrator | 
changed: [testbed-node-0] 2025-03-27 00:58:49.282203 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.282209 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.282215 | orchestrator | 2025-03-27 00:58:49.282221 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-27 00:58:49.282227 | orchestrator | Thursday 27 March 2025 00:56:21 +0000 (0:00:03.071) 0:05:45.694 ******** 2025-03-27 00:58:49.282233 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.282239 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.282245 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.282251 | orchestrator | 2025-03-27 00:58:49.282258 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-03-27 00:58:49.282264 | orchestrator | Thursday 27 March 2025 00:56:25 +0000 (0:00:03.608) 0:05:49.302 ******** 2025-03-27 00:58:49.282275 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-03-27 00:58:49.282282 | orchestrator | 2025-03-27 00:58:49.282288 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-03-27 00:58:49.282294 | orchestrator | Thursday 27 March 2025 00:56:26 +0000 (0:00:01.457) 0:05:50.760 ******** 2025-03-27 00:58:49.282300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-27 00:58:49.282306 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.282313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-27 00:58:49.282323 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.282329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-27 00:58:49.282336 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.282342 | orchestrator | 2025-03-27 00:58:49.282348 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 
2025-03-27 00:58:49.282354 | orchestrator | Thursday 27 March 2025 00:56:28 +0000 (0:00:01.658) 0:05:52.418 ******** 2025-03-27 00:58:49.282373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-27 00:58:49.282380 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.282392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-27 00:58:49.282399 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.282416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-27 00:58:49.282423 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.282429 | orchestrator | 2025-03-27 00:58:49.282436 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-03-27 00:58:49.282442 | orchestrator | Thursday 27 March 2025 00:56:30 +0000 (0:00:01.860) 0:05:54.279 ******** 2025-03-27 00:58:49.282448 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.282454 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.282463 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.282470 | orchestrator | 2025-03-27 00:58:49.282476 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-27 00:58:49.282482 | orchestrator | Thursday 27 March 2025 00:56:32 +0000 (0:00:02.315) 0:05:56.594 ******** 2025-03-27 00:58:49.282488 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:58:49.282494 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:58:49.282500 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:58:49.282507 | orchestrator | 2025-03-27 00:58:49.282513 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-27 00:58:49.282523 | orchestrator | Thursday 27 March 2025 00:56:35 +0000 (0:00:03.198) 0:05:59.792 ******** 2025-03-27 00:58:49.282529 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:58:49.282535 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:58:49.282541 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:58:49.282547 | orchestrator | 2025-03-27 
00:58:49.282554 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-03-27 00:58:49.282560 | orchestrator | Thursday 27 March 2025 00:56:39 +0000 (0:00:03.579) 0:06:03.372 ******** 2025-03-27 00:58:49.282566 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-03-27 00:58:49.282573 | orchestrator | 2025-03-27 00:58:49.282579 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-03-27 00:58:49.282585 | orchestrator | Thursday 27 March 2025 00:56:40 +0000 (0:00:01.616) 0:06:04.988 ******** 2025-03-27 00:58:49.282591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-27 00:58:49.282598 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.282604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-27 00:58:49.282610 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.282631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-27 00:58:49.282638 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.282644 | orchestrator | 2025-03-27 00:58:49.282650 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-03-27 00:58:49.282656 | orchestrator | Thursday 27 March 2025 00:56:42 +0000 (0:00:01.884) 0:06:06.872 ******** 2025-03-27 00:58:49.282663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-27 00:58:49.282669 | orchestrator | 
skipping: [testbed-node-0] 2025-03-27 00:58:49.282675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-27 00:58:49.282685 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.282696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-27 00:58:49.282703 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.282710 | orchestrator | 2025-03-27 00:58:49.282716 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-03-27 00:58:49.282722 | orchestrator | Thursday 27 March 2025 00:56:44 +0000 (0:00:01.502) 0:06:08.375 ******** 2025-03-27 00:58:49.282728 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.282734 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.282740 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.282747 | orchestrator | 2025-03-27 00:58:49.282753 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-27 00:58:49.282759 | orchestrator | Thursday 27 March 2025 00:56:46 +0000 (0:00:02.330) 0:06:10.705 ******** 2025-03-27 00:58:49.282765 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:58:49.282771 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:58:49.282777 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:58:49.282784 | orchestrator | 2025-03-27 00:58:49.282790 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-27 00:58:49.282796 | orchestrator | Thursday 27 March 2025 00:56:49 +0000 (0:00:02.978) 0:06:13.684 ******** 2025-03-27 00:58:49.282802 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:58:49.282808 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:58:49.282814 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:58:49.282821 | orchestrator | 2025-03-27 00:58:49.282827 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-03-27 00:58:49.282833 | orchestrator | Thursday 27 March 2025 00:56:53 +0000 (0:00:03.774) 0:06:17.459 ******** 2025-03-27 00:58:49.282839 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.282845 | orchestrator | 2025-03-27 00:58:49.282851 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-03-27 00:58:49.282858 | orchestrator | Thursday 27 March 2025 00:56:55 +0000 (0:00:01.833) 0:06:19.293 ******** 2025-03-27 00:58:49.282877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 
'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.282884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-27 00:58:49.282895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.282901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.282912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.282920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.282926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-27 00:58:49.282945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.282956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.282963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.282975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.282982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-27 00:58:49.282988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.283008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.283019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.283026 | orchestrator | 2025-03-27 00:58:49.283032 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-03-27 00:58:49.283038 | orchestrator | Thursday 27 March 2025 00:56:59 +0000 
(0:00:04.709) 0:06:24.002 ******** 2025-03-27 00:58:49.283050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.283057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-27 00:58:49.283063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.283070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.283089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.283100 | 
orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.283111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.283118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-27 00:58:49.283125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.283131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.283137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-27 00:58:49.283149 
| orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.283169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.283181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-27 00:58:49.283188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.283195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-27 00:58:49.283201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-27 
00:58:49.283208 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.283214 | orchestrator | 2025-03-27 00:58:49.283220 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-03-27 00:58:49.283226 | orchestrator | Thursday 27 March 2025 00:57:00 +0000 (0:00:01.031) 0:06:25.033 ******** 2025-03-27 00:58:49.283232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-27 00:58:49.283242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-27 00:58:49.283249 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.283255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-27 00:58:49.283261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-27 00:58:49.283268 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.283287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-27 00:58:49.283294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-27 00:58:49.283300 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.283306 | orchestrator | 2025-03-27 00:58:49.283313 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-03-27 00:58:49.283319 | orchestrator | Thursday 27 March 2025 00:57:02 +0000 (0:00:01.434) 0:06:26.467 ******** 2025-03-27 00:58:49.283325 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.283331 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.283337 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.283343 | orchestrator | 2025-03-27 00:58:49.283350 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-03-27 00:58:49.283356 | orchestrator | Thursday 27 March 2025 00:57:04 +0000 (0:00:01.744) 0:06:28.212 ******** 2025-03-27 00:58:49.283362 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.283368 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.283374 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.283380 | orchestrator | 2025-03-27 00:58:49.283386 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-03-27 00:58:49.283393 | orchestrator | Thursday 27 March 2025 00:57:06 +0000 (0:00:02.713) 0:06:30.925 ******** 2025-03-27 00:58:49.283399 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.283417 | orchestrator | 2025-03-27 00:58:49.283423 | 
orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-03-27 00:58:49.283429 | orchestrator | Thursday 27 March 2025 00:57:08 +0000 (0:00:01.990) 0:06:32.916 ******** 2025-03-27 00:58:49.283436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 00:58:49.283447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 00:58:49.283458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 00:58:49.283479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 00:58:49.283486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 00:58:49.283498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 00:58:49.283508 | orchestrator | 2025-03-27 00:58:49.283514 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-03-27 00:58:49.283521 | orchestrator | Thursday 27 March 2025 00:57:15 +0000 (0:00:07.090) 0:06:40.006 ******** 2025-03-27 00:58:49.283540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-27 00:58:49.283547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-27 00:58:49.283559 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.283565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-27 00:58:49.283623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-27 00:58:49.283634 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.283641 | 
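[Editorial aside, not part of the job output] The service definitions dumped above are what the haproxy-config role consumes: each entry's 'haproxy' dict (mode, port, listen_port, external, external_fqdn, auth_user/auth_pass, frontend_http_extra, active_passive) gets rendered into an HAProxy frontend/backend on the controller nodes. As a rough sketch only, assuming the 192.168.16.9 VIP (inferred from the no_proxy lists later in this log) and the 192.168.16.10-12 backends from the healthchecks above, and not reproducing kolla-ansible's actual template, the internal opensearch entry (port 9200, external: False, frontend_http_extra: ['option dontlog-normal']) maps onto a listen block along these lines:

# Hypothetical sketch, not kolla-ansible's real template: render a kolla-style
# 'haproxy' service dict (as dumped in the task output above) into an HAProxy
# listen block. VIP and backend addresses are assumptions based on this log.
def render_listen(name, svc, vip="192.168.16.9",
                  backends=("192.168.16.10", "192.168.16.11", "192.168.16.12")):
    lines = [f"listen {name}",
             f"    bind {vip}:{svc.get('listen_port', svc['port'])}",
             f"    mode {svc.get('mode', 'http')}"]
    # frontend_http_extra carries raw HAProxy lines, e.g. 'option dontlog-normal'.
    lines += [f"    {extra}" for extra in svc.get("frontend_http_extra", [])]
    for i, addr in enumerate(backends):
        # For 'active_passive' services all but one backend would be marked backup.
        backup = " backup" if svc.get("active_passive") and i else ""
        lines.append(f"    server testbed-node-{i} {addr}:{svc['port']} check{backup}")
    return "\n".join(lines)

print(render_listen("opensearch", {"mode": "http", "external": False, "port": "9200",
                                   "frontend_http_extra": ["option dontlog-normal"]}))

The external entries (e.g. opensearch_dashboards_external, prometheus_alertmanager_external) would additionally bind on the external VIP for api.testbed.osism.xyz and add basic auth from auth_user/auth_pass; the log continues below.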
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-27 00:58:49.283661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-27 00:58:49.283668 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.283674 | orchestrator | 2025-03-27 00:58:49.283681 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-03-27 00:58:49.283687 | orchestrator | Thursday 27 March 2025 00:57:16 +0000 (0:00:01.039) 0:06:41.046 ******** 2025-03-27 00:58:49.283693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-27 00:58:49.283699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-27 00:58:49.283706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-27 00:58:49.283716 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.283725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-27 00:58:49.283731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-27 00:58:49.283738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-27 00:58:49.283744 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.283750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-27 00:58:49.283756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-27 00:58:49.283762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-27 00:58:49.283769 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.283775 | orchestrator | 2025-03-27 00:58:49.283781 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-03-27 00:58:49.283787 | orchestrator | Thursday 27 March 2025 00:57:18 +0000 (0:00:01.701) 0:06:42.747 ******** 2025-03-27 00:58:49.283793 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.283799 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.283806 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.283812 | orchestrator | 2025-03-27 00:58:49.283818 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-03-27 00:58:49.283824 | orchestrator | Thursday 27 March 2025 00:57:19 +0000 (0:00:00.493) 0:06:43.241 ******** 2025-03-27 00:58:49.283830 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.283836 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.283842 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.283848 | orchestrator | 2025-03-27 00:58:49.283854 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-03-27 00:58:49.283861 | orchestrator | Thursday 27 March 2025 00:57:21 +0000 (0:00:01.858) 0:06:45.099 ******** 2025-03-27 00:58:49.283879 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.283886 | orchestrator | 2025-03-27 00:58:49.283892 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-03-27 00:58:49.283899 | orchestrator | Thursday 27 March 2025 00:57:23 +0000 (0:00:02.035) 0:06:47.135 ******** 2025-03-27 00:58:49.283905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-27 00:58:49.283915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 00:58:49.283922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.283929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.283935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.283942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-27 00:58:49.283964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-27 00:58:49.283971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 00:58:49.283981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 00:58:49.283988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.283994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.284034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.284044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-27 00:58:49.284051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 00:58:49.284057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.284090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-27 00:58:49.284109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 00:58:49.284115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-27 00:58:49.284140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.284147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 00:58:49.284153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.284181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-03-27 00:58:49.284192 | orchestrator | 2025-03-27 00:58:49.284198 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-03-27 00:58:49.284204 | orchestrator | Thursday 27 March 2025 00:57:28 +0000 (0:00:05.273) 0:06:52.408 ******** 2025-03-27 00:58:49.284214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 00:58:49.284220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 00:58:49.284227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.284249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 00:58:49.284259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 00:58:49.284265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.284285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284291 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.284299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 00:58:49.284309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 00:58:49.284316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.284337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 00:58:49.284347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 00:58:49.284357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.284376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284383 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.284389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 00:58:49.284395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 00:58:49.284449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.284473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 00:58:49.284480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 00:58:49.284487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 00:58:49.284513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 00:58:49.284520 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.284526 | orchestrator | 2025-03-27 00:58:49.284532 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-03-27 00:58:49.284538 | orchestrator | Thursday 27 March 2025 00:57:30 +0000 (0:00:01.744) 0:06:54.153 ******** 2025-03-27 00:58:49.284545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-27 00:58:49.284551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-27 00:58:49.284558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-27 00:58:49.284565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-27 00:58:49.284574 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.284581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-27 00:58:49.284587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-27 00:58:49.284594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-27 00:58:49.284606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-27 00:58:49.284613 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.284622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-27 00:58:49.284631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-27 00:58:49.284638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-27 00:58:49.284646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-27 00:58:49.284653 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.284659 | orchestrator | 2025-03-27 00:58:49.284665 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-03-27 00:58:49.284672 | orchestrator | Thursday 27 March 2025 00:57:31 +0000 (0:00:01.783) 0:06:55.937 ******** 2025-03-27 00:58:49.284678 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.284684 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.284690 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.284696 | orchestrator | 2025-03-27 00:58:49.284702 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-03-27 00:58:49.284708 | orchestrator | Thursday 27 March 2025 00:57:32 +0000 (0:00:00.814) 0:06:56.752 ******** 2025-03-27 00:58:49.284715 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.284721 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.284727 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.284733 | orchestrator | 2025-03-27 00:58:49.284739 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-03-27 00:58:49.284745 | orchestrator | Thursday 27 March 2025 00:57:34 +0000 (0:00:02.105) 0:06:58.858 ******** 2025-03-27 00:58:49.284751 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.284757 | orchestrator | 2025-03-27 00:58:49.284763 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-03-27 00:58:49.284769 | orchestrator | Thursday 27 March 2025 00:57:36 +0000 (0:00:02.105) 0:07:00.963 ******** 2025-03-27 00:58:49.284776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:58:49.284787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:58:49.284796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-27 00:58:49.284803 | orchestrator | 2025-03-27 00:58:49.284809 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-03-27 00:58:49.284815 | orchestrator | Thursday 27 March 2025 00:57:39 +0000 (0:00:02.978) 0:07:03.942 ******** 2025-03-27 00:58:49.284821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-27 00:58:49.284828 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.284835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-27 00:58:49.284846 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.284853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-27 00:58:49.284859 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.284865 | orchestrator | 2025-03-27 00:58:49.284872 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-03-27 00:58:49.284878 | orchestrator | Thursday 27 March 2025 00:57:40 +0000 (0:00:00.752) 0:07:04.694 ******** 2025-03-27 00:58:49.284884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-27 00:58:49.284890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-27 00:58:49.284896 | 
orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.284902 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.284910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-27 00:58:49.284916 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.284922 | orchestrator | 2025-03-27 00:58:49.284928 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-03-27 00:58:49.284933 | orchestrator | Thursday 27 March 2025 00:57:41 +0000 (0:00:01.238) 0:07:05.933 ******** 2025-03-27 00:58:49.284939 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.284945 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.284951 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.284956 | orchestrator | 2025-03-27 00:58:49.284962 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-03-27 00:58:49.284968 | orchestrator | Thursday 27 March 2025 00:57:42 +0000 (0:00:00.472) 0:07:06.405 ******** 2025-03-27 00:58:49.284974 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.284979 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.284985 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.284991 | orchestrator | 2025-03-27 00:58:49.284997 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-03-27 00:58:49.285003 | orchestrator | Thursday 27 March 2025 00:57:44 +0000 (0:00:01.842) 0:07:08.248 ******** 2025-03-27 00:58:49.285008 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 00:58:49.285017 | orchestrator | 2025-03-27 00:58:49.285023 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-03-27 00:58:49.285029 | orchestrator | Thursday 27 March 2025 00:57:46 +0000 (0:00:02.091) 0:07:10.340 ******** 2025-03-27 00:58:49.285035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.285041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.285047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.285056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.285066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.285072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-27 00:58:49.285078 | orchestrator | 2025-03-27 00:58:49.285084 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-03-27 00:58:49.285090 | orchestrator | Thursday 27 March 2025 00:57:54 +0000 (0:00:08.697) 0:07:19.038 ******** 2025-03-27 00:58:49.285102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.285111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.285123 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.285129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.285136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.285145 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.285161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-27 00:58:49.285178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-27 
00:58:49.285188 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.285204 | orchestrator | 2025-03-27 00:58:49.285214 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-03-27 00:58:49.285224 | orchestrator | Thursday 27 March 2025 00:57:56 +0000 (0:00:01.362) 0:07:20.400 ******** 2025-03-27 00:58:49.285235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285275 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.285286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285328 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.285339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-27 00:58:49.285378 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.285389 | orchestrator | 2025-03-27 00:58:49.285399 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-03-27 00:58:49.285424 | orchestrator | Thursday 27 March 2025 00:57:57 +0000 (0:00:01.601) 0:07:22.002 ******** 2025-03-27 00:58:49.285435 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.285445 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.285456 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.285467 | orchestrator | 2025-03-27 00:58:49.285478 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-03-27 00:58:49.285497 | orchestrator | Thursday 27 March 2025 00:57:59 +0000 (0:00:01.616) 0:07:23.619 ******** 2025-03-27 00:58:49.285507 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.285517 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.285526 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.285536 | orchestrator | 2025-03-27 00:58:49.285543 | orchestrator | TASK [include_role : swift] **************************************************** 2025-03-27 00:58:49.285552 | orchestrator | Thursday 27 March 2025 00:58:02 +0000 (0:00:02.748) 0:07:26.368 ******** 2025-03-27 00:58:49.285563 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.285570 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.285582 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.285593 | orchestrator | 2025-03-27 00:58:49.285599 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-03-27 00:58:49.285609 | orchestrator | Thursday 27 March 2025 00:58:02 +0000 (0:00:00.394) 0:07:26.762 ******** 2025-03-27 00:58:49.285619 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.285625 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.285634 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.285645 | orchestrator | 2025-03-27 00:58:49.285651 | orchestrator | TASK [include_role : trove] **************************************************** 2025-03-27 00:58:49.285660 | orchestrator | Thursday 27 March 2025 00:58:03 +0000 (0:00:00.642) 0:07:27.404 ******** 2025-03-27 00:58:49.285669 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.285676 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.285683 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.285692 | orchestrator | 2025-03-27 00:58:49.285701 | orchestrator | TASK [include_role : venus] **************************************************** 2025-03-27 00:58:49.285707 | orchestrator | Thursday 27 March 2025 00:58:03 +0000 (0:00:00.625) 0:07:28.030 ******** 2025-03-27 00:58:49.285717 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.285726 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.285732 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.285742 | orchestrator | 2025-03-27 00:58:49.285751 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-03-27 00:58:49.285757 | orchestrator | Thursday 27 March 2025 00:58:04 +0000 (0:00:00.625) 0:07:28.656 ******** 2025-03-27 00:58:49.285766 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.285776 | orchestrator | skipping: [testbed-node-1] 2025-03-27 
00:58:49.285782 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.285790 | orchestrator | 2025-03-27 00:58:49.285799 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-03-27 00:58:49.285808 | orchestrator | Thursday 27 March 2025 00:58:04 +0000 (0:00:00.358) 0:07:29.015 ******** 2025-03-27 00:58:49.285817 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.285826 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.285836 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.285842 | orchestrator | 2025-03-27 00:58:49.285851 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-03-27 00:58:49.285861 | orchestrator | Thursday 27 March 2025 00:58:06 +0000 (0:00:01.232) 0:07:30.248 ******** 2025-03-27 00:58:49.285867 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:58:49.285876 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:58:49.285886 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:58:49.285894 | orchestrator | 2025-03-27 00:58:49.285903 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-03-27 00:58:49.285913 | orchestrator | Thursday 27 March 2025 00:58:07 +0000 (0:00:01.000) 0:07:31.248 ******** 2025-03-27 00:58:49.285920 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:58:49.285929 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:58:49.285938 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:58:49.285946 | orchestrator | 2025-03-27 00:58:49.285953 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-03-27 00:58:49.285962 | orchestrator | Thursday 27 March 2025 00:58:07 +0000 (0:00:00.377) 0:07:31.626 ******** 2025-03-27 00:58:49.285975 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:58:49.285984 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:58:49.285995 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:58:49.286001 | orchestrator | 2025-03-27 00:58:49.286009 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-03-27 00:58:49.286040 | orchestrator | Thursday 27 March 2025 00:58:08 +0000 (0:00:01.428) 0:07:33.055 ******** 2025-03-27 00:58:49.286052 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:58:49.286058 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:58:49.286066 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:58:49.286075 | orchestrator | 2025-03-27 00:58:49.286085 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-03-27 00:58:49.286091 | orchestrator | Thursday 27 March 2025 00:58:10 +0000 (0:00:01.407) 0:07:34.462 ******** 2025-03-27 00:58:49.286100 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:58:49.286111 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:58:49.286117 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:58:49.286125 | orchestrator | 2025-03-27 00:58:49.286134 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-03-27 00:58:49.286143 | orchestrator | Thursday 27 March 2025 00:58:11 +0000 (0:00:01.039) 0:07:35.501 ******** 2025-03-27 00:58:49.286150 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.286159 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.286168 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.286176 | orchestrator | 2025-03-27 00:58:49.286184 | orchestrator | 
RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-03-27 00:58:49.286193 | orchestrator | Thursday 27 March 2025 00:58:17 +0000 (0:00:05.638) 0:07:41.140 ******** 2025-03-27 00:58:49.286201 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:58:49.286207 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:58:49.286216 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:58:49.286226 | orchestrator | 2025-03-27 00:58:49.286233 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-03-27 00:58:49.286242 | orchestrator | Thursday 27 March 2025 00:58:20 +0000 (0:00:03.200) 0:07:44.341 ******** 2025-03-27 00:58:49.286252 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.286258 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.286267 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.286277 | orchestrator | 2025-03-27 00:58:49.286283 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-03-27 00:58:49.286291 | orchestrator | Thursday 27 March 2025 00:58:26 +0000 (0:00:06.631) 0:07:50.972 ******** 2025-03-27 00:58:49.286300 | orchestrator | ok: [testbed-node-0] 2025-03-27 00:58:49.286310 | orchestrator | ok: [testbed-node-1] 2025-03-27 00:58:49.286316 | orchestrator | ok: [testbed-node-2] 2025-03-27 00:58:49.286325 | orchestrator | 2025-03-27 00:58:49.286335 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-03-27 00:58:49.286346 | orchestrator | Thursday 27 March 2025 00:58:30 +0000 (0:00:03.770) 0:07:54.742 ******** 2025-03-27 00:58:49.286355 | orchestrator | changed: [testbed-node-0] 2025-03-27 00:58:49.286365 | orchestrator | changed: [testbed-node-1] 2025-03-27 00:58:49.286374 | orchestrator | changed: [testbed-node-2] 2025-03-27 00:58:49.286381 | orchestrator | 2025-03-27 00:58:49.286395 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-03-27 00:58:49.286417 | orchestrator | Thursday 27 March 2025 00:58:35 +0000 (0:00:05.268) 0:08:00.011 ******** 2025-03-27 00:58:49.286427 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.286435 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.286442 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.286452 | orchestrator | 2025-03-27 00:58:49.286460 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-03-27 00:58:49.286467 | orchestrator | Thursday 27 March 2025 00:58:36 +0000 (0:00:00.681) 0:08:00.693 ******** 2025-03-27 00:58:49.286476 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.286490 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.286498 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.286507 | orchestrator | 2025-03-27 00:58:49.286518 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-03-27 00:58:49.286524 | orchestrator | Thursday 27 March 2025 00:58:37 +0000 (0:00:00.639) 0:08:01.333 ******** 2025-03-27 00:58:49.286532 | orchestrator | skipping: [testbed-node-0] 2025-03-27 00:58:49.286543 | orchestrator | skipping: [testbed-node-1] 2025-03-27 00:58:49.286549 | orchestrator | skipping: [testbed-node-2] 2025-03-27 00:58:49.286557 | orchestrator | 2025-03-27 00:58:49.286566 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-03-27 
00:58:49.286576 | orchestrator | Thursday 27 March 2025 00:58:37 +0000 (0:00:00.395) 0:08:01.728 ********
2025-03-27 00:58:49.286582 | orchestrator | skipping: [testbed-node-0]
2025-03-27 00:58:49.286592 | orchestrator | skipping: [testbed-node-1]
2025-03-27 00:58:49.286602 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:58:49.286608 | orchestrator |
2025-03-27 00:58:49.286617 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-03-27 00:58:49.286627 | orchestrator | Thursday 27 March 2025 00:58:38 +0000 (0:00:00.686) 0:08:02.414 ********
2025-03-27 00:58:49.286634 | orchestrator | skipping: [testbed-node-0]
2025-03-27 00:58:49.286642 | orchestrator | skipping: [testbed-node-1]
2025-03-27 00:58:49.286651 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:58:49.286659 | orchestrator |
2025-03-27 00:58:49.286666 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-03-27 00:58:49.286675 | orchestrator | Thursday 27 March 2025 00:58:39 +0000 (0:00:00.712) 0:08:03.127 ********
2025-03-27 00:58:49.286686 | orchestrator | skipping: [testbed-node-0]
2025-03-27 00:58:49.286692 | orchestrator | skipping: [testbed-node-1]
2025-03-27 00:58:49.286700 | orchestrator | skipping: [testbed-node-2]
2025-03-27 00:58:49.286710 | orchestrator |
2025-03-27 00:58:49.286718 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-03-27 00:58:49.286725 | orchestrator | Thursday 27 March 2025 00:58:39 +0000 (0:00:00.432) 0:08:03.560 ********
2025-03-27 00:58:49.286734 | orchestrator | ok: [testbed-node-1]
2025-03-27 00:58:49.286745 | orchestrator | ok: [testbed-node-2]
2025-03-27 00:58:49.286752 | orchestrator | ok: [testbed-node-0]
2025-03-27 00:58:49.286761 | orchestrator |
2025-03-27 00:58:49.286771 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-03-27 00:58:49.286778 | orchestrator | Thursday 27 March 2025 00:58:44 +0000 (0:00:05.117) 0:08:08.677 ********
2025-03-27 00:58:49.286788 | orchestrator | ok: [testbed-node-1]
2025-03-27 00:58:49.286799 | orchestrator | ok: [testbed-node-0]
2025-03-27 00:58:49.286809 | orchestrator | ok: [testbed-node-2]
2025-03-27 00:58:49.286819 | orchestrator |
2025-03-27 00:58:49.286829 | orchestrator | PLAY RECAP *********************************************************************
2025-03-27 00:58:49.286835 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-03-27 00:58:49.286845 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-03-27 00:58:49.286855 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-03-27 00:58:49.286862 | orchestrator |
2025-03-27 00:58:49.286870 | orchestrator |
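The two "Wait for ... to listen on VIP" handlers above block until the keepalived-managed VIP actually answers on the haproxy and proxysql ports. A minimal sketch of that kind of readiness check in Python is shown below; the address, port and timing values are hypothetical placeholders, not values taken from this deployment, and this is not the wait logic kolla-ansible itself ships.

# Sketch of a "wait until something listens on the VIP" check.
# The VIP address and port below are hypothetical placeholders.
import socket
import time

def wait_for_listener(host: str, port: int, timeout: float = 300.0, interval: float = 2.0) -> bool:
    """Return True once host:port accepts a TCP connection, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

if wait_for_listener("192.168.16.254", 15672):  # hypothetical VIP address; 15672 is the RabbitMQ management port seen above
    print("listener is answering on the VIP")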
2025-03-27 00:58:49.286879 | orchestrator | TASKS RECAP ********************************************************************
2025-03-27 00:58:49.286887 | orchestrator | Thursday 27 March 2025 00:58:45 +0000 (0:00:01.257) 0:08:09.935 ********
2025-03-27 00:58:49.286893 | orchestrator | ===============================================================================
2025-03-27 00:58:49.286903 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.70s
2025-03-27 00:58:49.286912 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.88s
2025-03-27 00:58:49.286924 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 7.15s
2025-03-27 00:58:49.286934 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.09s
2025-03-27 00:58:49.286944 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 6.67s
2025-03-27 00:58:49.286950 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 6.63s
2025-03-27 00:58:49.286959 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.57s
2025-03-27 00:58:49.286968 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.56s
2025-03-27 00:58:49.286977 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.43s
2025-03-27 00:58:49.286984 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 6.39s
2025-03-27 00:58:49.286994 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.08s
2025-03-27 00:58:49.287005 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.72s
2025-03-27 00:58:49.287013 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.67s
2025-03-27 00:58:49.287022 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.64s
2025-03-27 00:58:49.287033 | orchestrator | loadbalancer : Remove mariadb.cfg if proxysql enabled ------------------- 5.57s
2025-03-27 00:58:52.325653 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 5.45s
2025-03-27 00:58:52.325774 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.36s
2025-03-27 00:58:52.325795 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.33s
2025-03-27 00:58:52.325811 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.27s
2025-03-27 00:58:52.325826 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 5.27s
2025-03-27 00:58:52.325841 | orchestrator | 2025-03-27 00:58:49 | INFO  | Task 7e2a2e78-7b5d-4344-b852-281998ead47a is in state STARTED
2025-03-27 00:58:52.325857 | orchestrator | 2025-03-27 00:58:49 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED
2025-03-27 00:58:52.325872 | orchestrator | 2025-03-27 00:58:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 00:58:52.325887 | orchestrator | 2025-03-27 00:58:49 | INFO  | Wait 1 second(s) until the next check
2025-03-27 00:58:52.325920 | orchestrator | 2025-03-27 00:58:52 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED
2025-03-27 00:58:52.327603 | orchestrator | 2025-03-27 00:58:52 | INFO  | Task 7e2a2e78-7b5d-4344-b852-281998ead47a is in state STARTED
2025-03-27 00:58:52.332246 | orchestrator | 2025-03-27 00:58:52 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED
2025-03-27 00:58:52.333717 | orchestrator | 2025-03-27 00:58:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 00:58:55.384056 | orchestrator | 2025-03-27 00:58:52 | INFO  | Wait 1 second(s) until the next check
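The repeated "Task ... is in state STARTED" and "Wait 1 second(s) until the next check" messages above and below come from the deployment wrapper polling its background OSISM tasks once per second until they leave the STARTED state. A minimal sketch of such a wait loop follows; get_task_state() is a hypothetical stand-in for the real client call, not the actual osism API.

# Sketch of a poll-until-done loop like the one producing the status lines here.
# get_task_state() is a hypothetical placeholder for the real task-status lookup.
import time

TASK_IDS = [
    "ea7ee138-f48b-45a8-845e-6c18f53dc8a6",
    "7e2a2e78-7b5d-4344-b852-281998ead47a",
    "58c975b8-7000-45bf-b9ac-68840356d7ff",
    "06f38c9e-e3c1-4595-a798-aa145fe6df11",
]

def get_task_state(task_id: str) -> str:
    raise NotImplementedError("placeholder for the real task-status lookup")

def wait_for_tasks(task_ids, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)  # finished, e.g. SUCCESS or FAILURE
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)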
2025-03-27 00:58:55.384189 | orchestrator | 2025-03-27 00:58:55 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED
From this point the same polling block repeats unchanged: on every check the four tasks ea7ee138-f48b-45a8-845e-6c18f53dc8a6, 7e2a2e78-7b5d-4344-b852-281998ead47a, 58c975b8-7000-45bf-b9ac-68840356d7ff and 06f38c9e-e3c1-4595-a798-aa145fe6df11 are each reported as "is in state STARTED", followed by "Wait 1 second(s) until the next check", with the inner timestamps advancing roughly every three seconds. The captured output breaks off mid-entry at console timestamp 2025-03-27 01:00:51.643791; at the last complete check (2025-03-27 01:00:48) all four tasks were still in state STARTED.
01:00:51 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:00:51.644399 | orchestrator | 2025-03-27 01:00:51 | INFO  | Task 7e2a2e78-7b5d-4344-b852-281998ead47a is in state STARTED 2025-03-27 01:00:51.645573 | orchestrator | 2025-03-27 01:00:51 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:00:51.648044 | orchestrator | 2025-03-27 01:00:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:00:54.696521 | orchestrator | 2025-03-27 01:00:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:00:54.696655 | orchestrator | 2025-03-27 01:00:54 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:00:54.699394 | orchestrator | 2025-03-27 01:00:54 | INFO  | Task 7e2a2e78-7b5d-4344-b852-281998ead47a is in state STARTED 2025-03-27 01:00:54.703534 | orchestrator | 2025-03-27 01:00:54 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:00:54.706299 | orchestrator | 2025-03-27 01:00:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:00:54.706506 | orchestrator | 2025-03-27 01:00:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:00:57.757581 | orchestrator | 2025-03-27 01:00:57 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:00:57.760696 | orchestrator | 2025-03-27 01:00:57.760748 | orchestrator | 2025-03-27 01:00:57.760764 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:00:57.760779 | orchestrator | 2025-03-27 01:00:57.760793 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:00:57.760807 | orchestrator | Thursday 27 March 2025 00:58:50 +0000 (0:00:00.341) 0:00:00.341 ******** 2025-03-27 01:00:57.760822 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:00:57.760859 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:00:57.760874 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:00:57.760888 | orchestrator | 2025-03-27 01:00:57.760988 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:00:57.761003 | orchestrator | Thursday 27 March 2025 00:58:50 +0000 (0:00:00.428) 0:00:00.769 ******** 2025-03-27 01:00:57.761018 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-03-27 01:00:57.761033 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-03-27 01:00:57.761047 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-03-27 01:00:57.761061 | orchestrator | 2025-03-27 01:00:57.761074 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-03-27 01:00:57.761088 | orchestrator | 2025-03-27 01:00:57.761102 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-03-27 01:00:57.761116 | orchestrator | Thursday 27 March 2025 00:58:50 +0000 (0:00:00.317) 0:00:01.087 ******** 2025-03-27 01:00:57.761130 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:00:57.761144 | orchestrator | 2025-03-27 01:00:57.761158 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-03-27 01:00:57.761171 | orchestrator | Thursday 27 March 2025 00:58:51 +0000 (0:00:00.807) 0:00:01.894 
******** 2025-03-27 01:00:57.761186 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-27 01:00:57.761200 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-27 01:00:57.761213 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-27 01:00:57.761227 | orchestrator | 2025-03-27 01:00:57.761241 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-03-27 01:00:57.761254 | orchestrator | Thursday 27 March 2025 00:58:52 +0000 (0:00:00.836) 0:00:02.730 ******** 2025-03-27 01:00:57.761271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.761290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.761351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.761381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.761396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.761411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.761465 | orchestrator | 2025-03-27 01:00:57.761481 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-03-27 01:00:57.761495 | orchestrator | Thursday 27 March 2025 00:58:54 +0000 (0:00:01.588) 0:00:04.319 ******** 2025-03-27 01:00:57.761519 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:00:57.761533 
| orchestrator | 2025-03-27 01:00:57.761547 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-03-27 01:00:57.761561 | orchestrator | Thursday 27 March 2025 00:58:54 +0000 (0:00:00.788) 0:00:05.108 ******** 2025-03-27 01:00:57.761587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.761604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.761621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.761638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.761680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.761698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.761714 | orchestrator | 2025-03-27 01:00:57.761731 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-03-27 01:00:57.761747 | orchestrator | Thursday 27 March 2025 00:58:58 +0000 (0:00:04.015) 0:00:09.123 ******** 2025-03-27 01:00:57.761764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-27 01:00:57.761789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-27 01:00:57.761813 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:00:57.761837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-27 01:00:57.761855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-27 01:00:57.761871 | orchestrator | skipping: [testbed-node-1] 
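[Editor's note, illustrative aside:] The service definitions echoed in these task items configure a plain-HTTP healthcheck for both services ('healthcheck_curl http://192.168.16.10:9200' for OpenSearch and ':5601' for the dashboards), while the backend internal TLS certificate/key copy tasks are skipped on every node, which suggests backend TLS is not enabled for this service in this testbed. As a minimal sketch only, assuming healthcheck_curl amounts to an HTTP GET that must return a non-error status, the same readiness probe could be reproduced from the orchestrator with stock Python; the host/port values come from the log above, the helper name http_healthcheck is hypothetical.

    # Illustrative sketch: re-create the kind of check the
    # 'healthcheck_curl http://192.168.16.10:9200' test above performs,
    # assuming it simply requires a successful HTTP response.
    import urllib.request
    import urllib.error

    def http_healthcheck(url: str, timeout: float = 30.0) -> bool:
        """Return True if the endpoint answers with an HTTP status below 400."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 400
        except (urllib.error.URLError, OSError):
            return False

    if __name__ == "__main__":
        # Endpoints taken from the healthcheck entries logged above.
        for url in ("http://192.168.16.10:9200", "http://192.168.16.10:5601"):
            print(url, "healthy" if http_healthcheck(url) else "unreachable")
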
2025-03-27 01:00:57.761888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-27 01:00:57.761914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-27 01:00:57.761937 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:00:57.761954 | orchestrator | 2025-03-27 01:00:57.761969 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-03-27 01:00:57.761988 | orchestrator | Thursday 27 March 2025 00:58:59 +0000 (0:00:01.005) 0:00:10.129 ******** 2025-03-27 01:00:57.762009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-27 01:00:57.762075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-27 01:00:57.762090 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:00:57.762105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-27 01:00:57.762130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-27 01:00:57.762152 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:00:57.762173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-27 01:00:57.762189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-27 01:00:57.762203 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:00:57.762217 | orchestrator | 2025-03-27 01:00:57.762231 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-03-27 01:00:57.762246 | orchestrator | Thursday 27 March 2025 00:59:01 +0000 (0:00:01.520) 0:00:11.649 ******** 2025-03-27 01:00:57.762260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.762290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.762312 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.762335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.762359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.762375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.762396 | orchestrator | 2025-03-27 01:00:57.762411 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-03-27 01:00:57.762425 | orchestrator | Thursday 27 March 2025 00:59:04 +0000 (0:00:02.706) 0:00:14.356 ******** 2025-03-27 01:00:57.762472 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:00:57.762487 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:00:57.762501 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:00:57.762514 | orchestrator | 2025-03-27 01:00:57.762528 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-03-27 01:00:57.762542 | orchestrator | Thursday 27 March 2025 00:59:08 +0000 (0:00:04.677) 0:00:19.033 ******** 2025-03-27 01:00:57.762556 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:00:57.762569 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:00:57.762583 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:00:57.762597 | orchestrator | 2025-03-27 01:00:57.762611 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-03-27 01:00:57.762625 | orchestrator | Thursday 27 March 2025 00:59:10 +0000 (0:00:01.943) 0:00:20.977 ******** 2025-03-27 01:00:57.762647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.762664 | orchestrator | 2025-03-27 01:00:57 | INFO  | Task 7e2a2e78-7b5d-4344-b852-281998ead47a is in state SUCCESS 2025-03-27 01:00:57.762680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.762695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-27 01:00:57.762718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.762755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.762771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-27 01:00:57.762786 | orchestrator | 2025-03-27 01:00:57.762801 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-03-27 01:00:57.762815 | orchestrator | Thursday 27 March 2025 00:59:14 +0000 (0:00:03.415) 0:00:24.393 ******** 2025-03-27 01:00:57.762836 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:00:57.762850 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:00:57.763011 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:00:57.763031 | orchestrator | 2025-03-27 01:00:57.763045 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-03-27 01:00:57.763059 | orchestrator | Thursday 27 March 2025 00:59:14 +0000 (0:00:00.527) 0:00:24.920 ******** 2025-03-27 01:00:57.763073 | orchestrator | 2025-03-27 01:00:57.763087 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-03-27 01:00:57.763101 | orchestrator | Thursday 27 March 2025 00:59:14 +0000 (0:00:00.217) 0:00:25.138 ******** 2025-03-27 01:00:57.763115 | orchestrator | 2025-03-27 01:00:57.763129 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-03-27 01:00:57.763143 | orchestrator | Thursday 27 March 2025 00:59:15 +0000 (0:00:00.057) 0:00:25.195 ******** 2025-03-27 01:00:57.763156 | orchestrator | 2025-03-27 01:00:57.763170 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-03-27 01:00:57.763184 | orchestrator | Thursday 27 March 2025 00:59:15 +0000 (0:00:00.067) 0:00:25.262 ******** 2025-03-27 01:00:57.763198 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:00:57.763212 | orchestrator | 2025-03-27 01:00:57.763225 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-03-27 01:00:57.763239 | orchestrator | Thursday 27 March 2025 00:59:15 +0000 (0:00:00.350) 0:00:25.613 ******** 2025-03-27 01:00:57.763253 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:00:57.763267 | orchestrator | 2025-03-27 01:00:57.763281 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-03-27 01:00:57.763295 | orchestrator | Thursday 27 March 2025 00:59:16 +0000 (0:00:00.663) 0:00:26.277 ******** 2025-03-27 01:00:57.763309 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:00:57.763323 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:00:57.763337 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:00:57.763350 | orchestrator | 2025-03-27 01:00:57.763364 | orchestrator | RUNNING 
HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-03-27 01:00:57.763378 | orchestrator | Thursday 27 March 2025 00:59:47 +0000 (0:00:31.186) 0:00:57.463 ******** 2025-03-27 01:00:57.763392 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:00:57.763405 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:00:57.763419 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:00:57.763488 | orchestrator | 2025-03-27 01:00:57.763504 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-03-27 01:00:57.763518 | orchestrator | Thursday 27 March 2025 01:00:41 +0000 (0:00:54.468) 0:01:51.932 ******** 2025-03-27 01:00:57.763532 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:00:57.763546 | orchestrator | 2025-03-27 01:00:57.763560 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-03-27 01:00:57.763574 | orchestrator | Thursday 27 March 2025 01:00:42 +0000 (0:00:00.810) 0:01:52.742 ******** 2025-03-27 01:00:57.763588 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:00:57.763602 | orchestrator | 2025-03-27 01:00:57.763616 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-03-27 01:00:57.763630 | orchestrator | Thursday 27 March 2025 01:00:45 +0000 (0:00:02.843) 0:01:55.586 ******** 2025-03-27 01:00:57.763644 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:00:57.763660 | orchestrator | 2025-03-27 01:00:57.763677 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-03-27 01:00:57.763700 | orchestrator | Thursday 27 March 2025 01:00:48 +0000 (0:00:02.730) 0:01:58.316 ******** 2025-03-27 01:00:57.763715 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:00:57.763730 | orchestrator | 2025-03-27 01:00:57.763744 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-03-27 01:00:57.763766 | orchestrator | Thursday 27 March 2025 01:00:51 +0000 (0:00:03.281) 0:02:01.598 ******** 2025-03-27 01:01:00.810804 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:00.810910 | orchestrator | 2025-03-27 01:01:00.810929 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:01:00.810945 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 01:01:00.810961 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 01:01:00.810975 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-27 01:01:00.810989 | orchestrator | 2025-03-27 01:01:00.811003 | orchestrator | 2025-03-27 01:01:00.811017 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:01:00.811030 | orchestrator | Thursday 27 March 2025 01:00:54 +0000 (0:00:03.233) 0:02:04.831 ******** 2025-03-27 01:01:00.811044 | orchestrator | =============================================================================== 2025-03-27 01:01:00.811058 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 54.47s 2025-03-27 01:01:00.811071 | orchestrator | opensearch : Restart opensearch container ------------------------------ 31.19s 2025-03-27 01:01:00.811085 | 
orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.68s 2025-03-27 01:01:00.811099 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 4.02s 2025-03-27 01:01:00.811112 | orchestrator | opensearch : Check opensearch containers -------------------------------- 3.42s 2025-03-27 01:01:00.811126 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.28s 2025-03-27 01:01:00.811140 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.23s 2025-03-27 01:01:00.811154 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.84s 2025-03-27 01:01:00.811167 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.73s 2025-03-27 01:01:00.811181 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.71s 2025-03-27 01:01:00.811195 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.94s 2025-03-27 01:01:00.811208 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.59s 2025-03-27 01:01:00.811222 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.52s 2025-03-27 01:01:00.811235 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.01s 2025-03-27 01:01:00.811250 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.84s 2025-03-27 01:01:00.811263 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.81s 2025-03-27 01:01:00.811277 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.81s 2025-03-27 01:01:00.811290 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.79s 2025-03-27 01:01:00.811304 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.66s 2025-03-27 01:01:00.811318 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-03-27 01:01:00.811332 | orchestrator | 2025-03-27 01:00:57 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:00.811346 | orchestrator | 2025-03-27 01:00:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:00.811363 | orchestrator | 2025-03-27 01:00:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:00.811394 | orchestrator | 2025-03-27 01:01:00 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:00.813905 | orchestrator | 2025-03-27 01:01:00 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:00.816033 | orchestrator | 2025-03-27 01:01:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:03.879349 | orchestrator | 2025-03-27 01:01:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:03.879507 | orchestrator | 2025-03-27 01:01:03 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:03.880400 | orchestrator | 2025-03-27 01:01:03 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:03.882551 | orchestrator | 2025-03-27 01:01:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:03.882885 | 
orchestrator | 2025-03-27 01:01:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:06.930883 | orchestrator | 2025-03-27 01:01:06 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:06.934814 | orchestrator | 2025-03-27 01:01:06 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:06.937738 | orchestrator | 2025-03-27 01:01:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:06.937914 | orchestrator | 2025-03-27 01:01:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:09.992600 | orchestrator | 2025-03-27 01:01:09 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:09.994387 | orchestrator | 2025-03-27 01:01:09 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:09.994463 | orchestrator | 2025-03-27 01:01:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:13.059757 | orchestrator | 2025-03-27 01:01:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:13.059888 | orchestrator | 2025-03-27 01:01:13 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:13.061398 | orchestrator | 2025-03-27 01:01:13 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:13.062316 | orchestrator | 2025-03-27 01:01:13 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:16.106012 | orchestrator | 2025-03-27 01:01:13 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:16.106219 | orchestrator | 2025-03-27 01:01:16 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:16.106521 | orchestrator | 2025-03-27 01:01:16 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:16.107646 | orchestrator | 2025-03-27 01:01:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:19.172762 | orchestrator | 2025-03-27 01:01:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:19.172885 | orchestrator | 2025-03-27 01:01:19 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:19.175035 | orchestrator | 2025-03-27 01:01:19 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:19.177165 | orchestrator | 2025-03-27 01:01:19 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:19.177776 | orchestrator | 2025-03-27 01:01:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:22.240336 | orchestrator | 2025-03-27 01:01:22 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:22.243005 | orchestrator | 2025-03-27 01:01:22 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:22.245311 | orchestrator | 2025-03-27 01:01:22 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:22.245559 | orchestrator | 2025-03-27 01:01:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:25.303988 | orchestrator | 2025-03-27 01:01:25 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:25.304360 | orchestrator | 2025-03-27 01:01:25 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:25.305621 | orchestrator | 2025-03-27 01:01:25 | INFO  | Task 
06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:28.364628 | orchestrator | 2025-03-27 01:01:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:28.364795 | orchestrator | 2025-03-27 01:01:28 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:28.368640 | orchestrator | 2025-03-27 01:01:28 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:28.371576 | orchestrator | 2025-03-27 01:01:28 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:28.371680 | orchestrator | 2025-03-27 01:01:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:31.432378 | orchestrator | 2025-03-27 01:01:31 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:31.434393 | orchestrator | 2025-03-27 01:01:31 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:31.435362 | orchestrator | 2025-03-27 01:01:31 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:34.485245 | orchestrator | 2025-03-27 01:01:31 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:34.485380 | orchestrator | 2025-03-27 01:01:34 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:34.486555 | orchestrator | 2025-03-27 01:01:34 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:34.488674 | orchestrator | 2025-03-27 01:01:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:37.552112 | orchestrator | 2025-03-27 01:01:34 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:37.552246 | orchestrator | 2025-03-27 01:01:37 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:37.553650 | orchestrator | 2025-03-27 01:01:37 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:37.555685 | orchestrator | 2025-03-27 01:01:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:40.604164 | orchestrator | 2025-03-27 01:01:37 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:40.604324 | orchestrator | 2025-03-27 01:01:40 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:40.604934 | orchestrator | 2025-03-27 01:01:40 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:40.606285 | orchestrator | 2025-03-27 01:01:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:43.660177 | orchestrator | 2025-03-27 01:01:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:43.660307 | orchestrator | 2025-03-27 01:01:43 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:43.661016 | orchestrator | 2025-03-27 01:01:43 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:43.662298 | orchestrator | 2025-03-27 01:01:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:43.662878 | orchestrator | 2025-03-27 01:01:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:46.711858 | orchestrator | 2025-03-27 01:01:46 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:46.713624 | orchestrator | 2025-03-27 01:01:46 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state 
STARTED 2025-03-27 01:01:46.715180 | orchestrator | 2025-03-27 01:01:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:49.767204 | orchestrator | 2025-03-27 01:01:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:49.767331 | orchestrator | 2025-03-27 01:01:49 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:49.769192 | orchestrator | 2025-03-27 01:01:49 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:49.771237 | orchestrator | 2025-03-27 01:01:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:52.825315 | orchestrator | 2025-03-27 01:01:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:52.825499 | orchestrator | 2025-03-27 01:01:52 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:52.826780 | orchestrator | 2025-03-27 01:01:52 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state STARTED 2025-03-27 01:01:52.828543 | orchestrator | 2025-03-27 01:01:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:52.828684 | orchestrator | 2025-03-27 01:01:52 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:55.882746 | orchestrator | 2025-03-27 01:01:55 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:55.891856 | orchestrator | 2025-03-27 01:01:55 | INFO  | Task 58c975b8-7000-45bf-b9ac-68840356d7ff is in state SUCCESS 2025-03-27 01:01:55.892252 | orchestrator | 2025-03-27 01:01:55.894878 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-27 01:01:55.894924 | orchestrator | 2025-03-27 01:01:55.894939 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-03-27 01:01:55.894954 | orchestrator | 2025-03-27 01:01:55.894968 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-03-27 01:01:55.894982 | orchestrator | Thursday 27 March 2025 00:47:53 +0000 (0:00:02.077) 0:00:02.077 ******** 2025-03-27 01:01:55.894997 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.895012 | orchestrator | 2025-03-27 01:01:55.895026 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-03-27 01:01:55.895055 | orchestrator | Thursday 27 March 2025 00:47:55 +0000 (0:00:01.681) 0:00:03.758 ******** 2025-03-27 01:01:55.895070 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:01:55.895084 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-03-27 01:01:55.895098 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-03-27 01:01:55.895112 | orchestrator | 2025-03-27 01:01:55.895126 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-03-27 01:01:55.895508 | orchestrator | Thursday 27 March 2025 00:47:56 +0000 (0:00:00.926) 0:00:04.685 ******** 2025-03-27 01:01:55.895534 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.895548 | orchestrator | 2025-03-27 01:01:55.895562 | orchestrator | 
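[Editor's note] The repeated "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines above are the deploy wrapper polling the tasks it enqueued until each one reports SUCCESS (as one task does at 01:01:55) before the next play is started. A minimal sketch of such a wait loop follows; get_task_state() is a hypothetical placeholder, not the real OSISM client API.

```python
# Minimal sketch of the task-wait behaviour visible in the log above:
# poll each task until it leaves STARTED, sleeping between rounds.
import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                    level=logging.INFO)

def get_task_state(task_id: str) -> str:
    """Placeholder: the real deployment queries the manager for the task state."""
    raise NotImplementedError

def wait_for_tasks(task_ids, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):          # sorted() copies, so discard() is safe
            state = get_task_state(task_id)
            logging.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):  # stop tracking finished tasks
                pending.discard(task_id)
        if pending:
            logging.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)
```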
TASK [ceph-facts : check if it is atomic host] ********************************* 2025-03-27 01:01:55.895576 | orchestrator | Thursday 27 March 2025 00:47:58 +0000 (0:00:01.792) 0:00:06.477 ******** 2025-03-27 01:01:55.895612 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.895628 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.895642 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.895656 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.895670 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.895683 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.895697 | orchestrator | 2025-03-27 01:01:55.895711 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-03-27 01:01:55.895726 | orchestrator | Thursday 27 March 2025 00:48:00 +0000 (0:00:02.092) 0:00:08.570 ******** 2025-03-27 01:01:55.895740 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.895754 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.895767 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.895781 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.895795 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.895808 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.895822 | orchestrator | 2025-03-27 01:01:55.896431 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-03-27 01:01:55.896478 | orchestrator | Thursday 27 March 2025 00:48:01 +0000 (0:00:01.531) 0:00:10.101 ******** 2025-03-27 01:01:55.896494 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.896508 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.896522 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.896536 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.896586 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.896646 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.896662 | orchestrator | 2025-03-27 01:01:55.896677 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-03-27 01:01:55.896691 | orchestrator | Thursday 27 March 2025 00:48:03 +0000 (0:00:01.547) 0:00:11.649 ******** 2025-03-27 01:01:55.896705 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.896718 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.896873 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.896889 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.896904 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.896917 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.896931 | orchestrator | 2025-03-27 01:01:55.896945 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-03-27 01:01:55.896959 | orchestrator | Thursday 27 March 2025 00:48:04 +0000 (0:00:01.382) 0:00:13.032 ******** 2025-03-27 01:01:55.896973 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.896987 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.897001 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.897014 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.897028 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.897043 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.897057 | orchestrator | 2025-03-27 01:01:55.897071 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-03-27 01:01:55.897746 | orchestrator | Thursday 27 March 2025 00:48:05 +0000 
(0:00:01.197) 0:00:14.229 ******** 2025-03-27 01:01:55.897761 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.897775 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.897789 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.897803 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.897817 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.897831 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.897845 | orchestrator | 2025-03-27 01:01:55.897859 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-03-27 01:01:55.897873 | orchestrator | Thursday 27 March 2025 00:48:07 +0000 (0:00:02.004) 0:00:16.234 ******** 2025-03-27 01:01:55.897887 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.897902 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.897917 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.898660 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.898688 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.898702 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.898732 | orchestrator | 2025-03-27 01:01:55.898746 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-03-27 01:01:55.898760 | orchestrator | Thursday 27 March 2025 00:48:08 +0000 (0:00:01.199) 0:00:17.434 ******** 2025-03-27 01:01:55.898774 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.898788 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.898801 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.898815 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.898829 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.898842 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.898856 | orchestrator | 2025-03-27 01:01:55.898950 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-03-27 01:01:55.899891 | orchestrator | Thursday 27 March 2025 00:48:10 +0000 (0:00:01.099) 0:00:18.533 ******** 2025-03-27 01:01:55.899912 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:01:55.899928 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:01:55.899944 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:01:55.899958 | orchestrator | 2025-03-27 01:01:55.899973 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-03-27 01:01:55.899988 | orchestrator | Thursday 27 March 2025 00:48:10 +0000 (0:00:00.838) 0:00:19.372 ******** 2025-03-27 01:01:55.900003 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.900018 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.900033 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.900048 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.900062 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.900077 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.900091 | orchestrator | 2025-03-27 01:01:55.900106 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-03-27 01:01:55.900121 | orchestrator | Thursday 27 March 2025 00:48:12 +0000 (0:00:01.758) 0:00:21.130 ******** 2025-03-27 01:01:55.900136 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:01:55.900151 | orchestrator | changed: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:01:55.900166 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:01:55.900180 | orchestrator | 2025-03-27 01:01:55.900195 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-03-27 01:01:55.900210 | orchestrator | Thursday 27 March 2025 00:48:15 +0000 (0:00:03.036) 0:00:24.166 ******** 2025-03-27 01:01:55.900225 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:01:55.900240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:01:55.900255 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:01:55.900270 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.900285 | orchestrator | 2025-03-27 01:01:55.900299 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-03-27 01:01:55.900322 | orchestrator | Thursday 27 March 2025 00:48:16 +0000 (0:00:00.753) 0:00:24.920 ******** 2025-03-27 01:01:55.900339 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-03-27 01:01:55.900357 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-03-27 01:01:55.900372 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-03-27 01:01:55.900399 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.900414 | orchestrator | 2025-03-27 01:01:55.900429 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-03-27 01:01:55.900506 | orchestrator | Thursday 27 March 2025 00:48:17 +0000 (0:00:01.015) 0:00:25.936 ******** 2025-03-27 01:01:55.900527 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-27 01:01:55.900545 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-27 01:01:55.900562 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-27 01:01:55.900579 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.900595 | orchestrator | 2025-03-27 01:01:55.900612 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-03-27 01:01:55.900737 | orchestrator | Thursday 27 March 2025 00:48:17 +0000 (0:00:00.355) 0:00:26.291 ******** 2025-03-27 01:01:55.900761 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-03-27 00:48:13.571679', 'end': '2025-03-27 00:48:13.809398', 'delta': '0:00:00.237719', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-03-27 01:01:55.900780 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-03-27 00:48:14.335124', 'end': '2025-03-27 00:48:14.586700', 'delta': '0:00:00.251576', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-03-27 01:01:55.900795 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-03-27 00:48:15.156840', 'end': '2025-03-27 00:48:15.411326', 'delta': '0:00:00.254486', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-03-27 01:01:55.900820 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.900834 | orchestrator | 2025-03-27 01:01:55.900848 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-03-27 01:01:55.900861 | orchestrator | Thursday 27 March 2025 00:48:18 +0000 (0:00:00.458) 0:00:26.749 ******** 2025-03-27 01:01:55.900874 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.900888 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.900900 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.900913 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.900926 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.900939 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.900951 | orchestrator | 2025-03-27 01:01:55.900964 | 
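[Editor's note] The ceph-facts tasks above ("check if podman binary is present", "set_fact container_binary", "find a running mon container", "set_fact container_exec_cmd") effectively prefer podman when it is installed, fall back to docker, and then look for a ceph-mon-<hostname> container with the `ps -q --filter name=...` command visible in the skipped-item dumps. A rough Python equivalent, for illustration only; the real logic lives in ceph-ansible's Jinja conditions, and the exec command shape is an assumption.

```python
# Illustrative re-implementation of the fact gathering shown above, not the
# ceph-ansible code itself: pick podman over docker when available, then check
# whether a ceph-mon container for this host is already running.
import shutil
import socket
import subprocess

def container_binary() -> str:
    """Mirror of 'check if podman binary is present' / 'set_fact container_binary'."""
    return "podman" if shutil.which("podman") else "docker"

def running_mon_container(binary: str, hostname: str):
    """Mirror of 'find a running mon container':
    <binary> ps -q --filter name=ceph-mon-<hostname>"""
    out = subprocess.run(
        [binary, "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=False,
    ).stdout.strip()
    return out or None  # empty output (as in the dumps above) means no mon is running yet

binary = container_binary()
hostname = socket.gethostname()                    # e.g. testbed-node-0
# Roughly the shape of container_exec_cmd set above (assumed, not quoted from the role).
container_exec_cmd = f"{binary} exec ceph-mon-{hostname}"
print(running_mon_container(binary, hostname), container_exec_cmd)
```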
orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-03-27 01:01:55.900977 | orchestrator | Thursday 27 March 2025 00:48:21 +0000 (0:00:03.049) 0:00:29.799 ******** 2025-03-27 01:01:55.900989 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.901002 | orchestrator | 2025-03-27 01:01:55.901015 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-03-27 01:01:55.901027 | orchestrator | Thursday 27 March 2025 00:48:22 +0000 (0:00:00.802) 0:00:30.602 ******** 2025-03-27 01:01:55.901040 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.901052 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.901065 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.901077 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.901090 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.901102 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.901122 | orchestrator | 2025-03-27 01:01:55.901135 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-03-27 01:01:55.901148 | orchestrator | Thursday 27 March 2025 00:48:23 +0000 (0:00:00.958) 0:00:31.560 ******** 2025-03-27 01:01:55.901160 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.901173 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.901185 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.901198 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.901210 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.901223 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.901235 | orchestrator | 2025-03-27 01:01:55.901248 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-27 01:01:55.901261 | orchestrator | Thursday 27 March 2025 00:48:24 +0000 (0:00:01.349) 0:00:32.910 ******** 2025-03-27 01:01:55.901273 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.901285 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.901298 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.901310 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.901323 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.901335 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.901348 | orchestrator | 2025-03-27 01:01:55.901361 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-03-27 01:01:55.901374 | orchestrator | Thursday 27 March 2025 00:48:25 +0000 (0:00:00.908) 0:00:33.818 ******** 2025-03-27 01:01:55.901469 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.901489 | orchestrator | 2025-03-27 01:01:55.901502 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-03-27 01:01:55.901515 | orchestrator | Thursday 27 March 2025 00:48:25 +0000 (0:00:00.489) 0:00:34.308 ******** 2025-03-27 01:01:55.901528 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.901540 | orchestrator | 2025-03-27 01:01:55.901553 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-27 01:01:55.901565 | orchestrator | Thursday 27 March 2025 00:48:26 +0000 (0:00:00.358) 0:00:34.666 ******** 2025-03-27 01:01:55.901578 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.901591 | orchestrator | skipping: 
[testbed-node-1] 2025-03-27 01:01:55.901603 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.901623 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.901636 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.901648 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.901666 | orchestrator | 2025-03-27 01:01:55.901679 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-03-27 01:01:55.901692 | orchestrator | Thursday 27 March 2025 00:48:27 +0000 (0:00:00.850) 0:00:35.516 ******** 2025-03-27 01:01:55.901704 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.901717 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.901730 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.901742 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.901755 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.901767 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.901780 | orchestrator | 2025-03-27 01:01:55.901792 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-03-27 01:01:55.901805 | orchestrator | Thursday 27 March 2025 00:48:28 +0000 (0:00:01.051) 0:00:36.568 ******** 2025-03-27 01:01:55.901818 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.901830 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.901842 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.901855 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.901867 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.901879 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.901892 | orchestrator | 2025-03-27 01:01:55.901905 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-03-27 01:01:55.901917 | orchestrator | Thursday 27 March 2025 00:48:29 +0000 (0:00:01.129) 0:00:37.698 ******** 2025-03-27 01:01:55.901930 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.901942 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.901955 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.901967 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.901979 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.901992 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.902004 | orchestrator | 2025-03-27 01:01:55.902070 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-03-27 01:01:55.902089 | orchestrator | Thursday 27 March 2025 00:48:30 +0000 (0:00:01.245) 0:00:38.943 ******** 2025-03-27 01:01:55.902103 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.902117 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.902132 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.902146 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.902161 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.902175 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.902189 | orchestrator | 2025-03-27 01:01:55.902203 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-03-27 01:01:55.902218 | orchestrator | Thursday 27 March 2025 00:48:31 +0000 (0:00:01.110) 0:00:40.053 ******** 2025-03-27 01:01:55.902232 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.902246 | orchestrator | 
skipping: [testbed-node-1] 2025-03-27 01:01:55.902261 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.902275 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.902289 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.902303 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.902318 | orchestrator | 2025-03-27 01:01:55.902338 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-03-27 01:01:55.902353 | orchestrator | Thursday 27 March 2025 00:48:32 +0000 (0:00:01.123) 0:00:41.177 ******** 2025-03-27 01:01:55.902367 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.902382 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.902403 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.902420 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.902434 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.902472 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.902485 | orchestrator | 2025-03-27 01:01:55.902498 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-03-27 01:01:55.902511 | orchestrator | Thursday 27 March 2025 00:48:33 +0000 (0:00:01.179) 0:00:42.356 ******** 2025-03-27 01:01:55.902524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5', 'scsi-SQEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.902899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.902914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbd72eb5-415c-46b6-800c-c9a4152e0b1d', 'scsi-SQEMU_QEMU_HARDDISK_dbd72eb5-415c-46b6-800c-c9a4152e0b1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.902990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c06239b1-1e23-4e3e-9542-3c7768e76fd7', 'scsi-SQEMU_QEMU_HARDDISK_c06239b1-1e23-4e3e-9542-3c7768e76fd7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18887533-b38f-4c9f-bae8-d4f30e6c3682', 'scsi-SQEMU_QEMU_HARDDISK_18887533-b38f-4c9f-bae8-d4f30e6c3682'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18887533-b38f-4c9f-bae8-d4f30e6c3682-part1', 'scsi-SQEMU_QEMU_HARDDISK_18887533-b38f-4c9f-bae8-d4f30e6c3682-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18887533-b38f-4c9f-bae8-d4f30e6c3682-part14', 'scsi-SQEMU_QEMU_HARDDISK_18887533-b38f-4c9f-bae8-d4f30e6c3682-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18887533-b38f-4c9f-bae8-d4f30e6c3682-part15', 'scsi-SQEMU_QEMU_HARDDISK_18887533-b38f-4c9f-bae8-d4f30e6c3682-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18887533-b38f-4c9f-bae8-d4f30e6c3682-part16', 'scsi-SQEMU_QEMU_HARDDISK_18887533-b38f-4c9f-bae8-d4f30e6c3682-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c304c21c-7b61-43fc-89e5-88e0ceb08200', 'scsi-SQEMU_QEMU_HARDDISK_c304c21c-7b61-43fc-89e5-88e0ceb08200'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29ec91c5-8d97-4cfd-bce6-384323cd2541', 'scsi-SQEMU_QEMU_HARDDISK_29ec91c5-8d97-4cfd-bce6-384323cd2541'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-27-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61228255-bfc1-4c3b-9b0a-267eeef01c9c', 'scsi-SQEMU_QEMU_HARDDISK_61228255-bfc1-4c3b-9b0a-267eeef01c9c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77112039-dad3-47d6-9314-c2213ca1fc67', 'scsi-SQEMU_QEMU_HARDDISK_77112039-dad3-47d6-9314-c2213ca1fc67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903212 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.903225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-27-00-02-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903410 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.903423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3cc3972-7e4e-4695-9af3-1d8e6eae8a85', 'scsi-SQEMU_QEMU_HARDDISK_e3cc3972-7e4e-4695-9af3-1d8e6eae8a85'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3cc3972-7e4e-4695-9af3-1d8e6eae8a85-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3cc3972-7e4e-4695-9af3-1d8e6eae8a85-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3cc3972-7e4e-4695-9af3-1d8e6eae8a85-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3cc3972-7e4e-4695-9af3-1d8e6eae8a85-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3cc3972-7e4e-4695-9af3-1d8e6eae8a85-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3cc3972-7e4e-4695-9af3-1d8e6eae8a85-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3cc3972-7e4e-4695-9af3-1d8e6eae8a85-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3cc3972-7e4e-4695-9af3-1d8e6eae8a85-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50b1bf4c-79f1-4c85-95b4-05ba7fb61d40', 'scsi-SQEMU_QEMU_HARDDISK_50b1bf4c-79f1-4c85-95b4-05ba7fb61d40'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5e2bf155--ac50--562d--a3fc--a4d9038fe730-osd--block--5e2bf155--ac50--562d--a3fc--a4d9038fe730', 'dm-uuid-LVM-QA8Lq98hT0WrqvFZAwwmxLAQKDG9xLcdqmUJYcccF1Xf9DZu8JzS7iQDk0QMlG2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2bb18ed-1663-4732-9ace-7a8cbf1e5186', 'scsi-SQEMU_QEMU_HARDDISK_f2bb18ed-1663-4732-9ace-7a8cbf1e5186'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfc1b1dd-9bfd-4d32-b01b-91720163ebc8', 'scsi-SQEMU_QEMU_HARDDISK_bfc1b1dd-9bfd-4d32-b01b-91720163ebc8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d321ea45--1a00--5698--8092--45c793cb3b8c-osd--block--d321ea45--1a00--5698--8092--45c793cb3b8c', 'dm-uuid-LVM-sgMXA1eJjWzofV27oOT5zNkGmgY2I1NJ2S6grLdwwQsM2iwC23SpFJc8NuP5WZfj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-27-00-02-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-03-27 01:01:55.903791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903808 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.903823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bac76156--9f65--5e37--8447--16c40269f5cf-osd--block--bac76156--9f65--5e37--8447--16c40269f5cf', 'dm-uuid-LVM-cLquHM6cTtxcfmF0FIJtGaa5SY2WsbrQzYjdnqtOmEzrnBmaHUUcOAuMpvl1kC4q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21', 'scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part15', 'scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5e2bf155--ac50--562d--a3fc--a4d9038fe730-osd--block--5e2bf155--ac50--562d--a3fc--a4d9038fe730'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lf6Yge-HAyn-0DtL-eRlI-G2Y8-DOpx-0CFKlG', 'scsi-0QEMU_QEMU_HARDDISK_3a3b00e3-da7a-4c3b-8b0c-ab011795b6c9', 'scsi-SQEMU_QEMU_HARDDISK_3a3b00e3-da7a-4c3b-8b0c-ab011795b6c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.903968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cb3edc0f--ef8f--5bb1--94d3--58e33ab1473b-osd--block--cb3edc0f--ef8f--5bb1--94d3--58e33ab1473b', 'dm-uuid-LVM-acuslDl7ym18pYJdSP1LtxEkeZilUcCsw10HMf7X50fbeph6IfiESSzRGBGbxoce'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.903988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d321ea45--1a00--5698--8092--45c793cb3b8c-osd--block--d321ea45--1a00--5698--8092--45c793cb3b8c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZHmqh2-AoUZ-coXE-4raU-G2ju-gAl6-S8I80b', 'scsi-0QEMU_QEMU_HARDDISK_1a89a9ff-44c1-4404-a46c-604e790c64d7', 'scsi-SQEMU_QEMU_HARDDISK_1a89a9ff-44c1-4404-a46c-604e790c64d7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_874d53e3-fb17-4b5b-8e0b-b33da9e1cc23', 'scsi-SQEMU_QEMU_HARDDISK_874d53e3-fb17-4b5b-8e0b-b33da9e1cc23'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-27-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666', 'scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part1', 'scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part14', 'scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part15', 'scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part16', 'scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bac76156--9f65--5e37--8447--16c40269f5cf-osd--block--bac76156--9f65--5e37--8447--16c40269f5cf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k3HdPK-IFGM-nunJ-uK6V-IehT-ZxL4-QT0Qr2', 'scsi-0QEMU_QEMU_HARDDISK_3b62db4a-d9c9-4dee-909c-fb2dda9345a8', 'scsi-SQEMU_QEMU_HARDDISK_3b62db4a-d9c9-4dee-909c-fb2dda9345a8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cb3edc0f--ef8f--5bb1--94d3--58e33ab1473b-osd--block--cb3edc0f--ef8f--5bb1--94d3--58e33ab1473b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FNHXoR-F0L1-xpbb-GM1d-Larw-nc1G-0enZLi', 'scsi-0QEMU_QEMU_HARDDISK_5498cf3d-971d-4d04-a26e-caa954b0ff0a', 'scsi-SQEMU_QEMU_HARDDISK_5498cf3d-971d-4d04-a26e-caa954b0ff0a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8735590-8c0d-455a-9e36-1ed693cbdd10', 'scsi-SQEMU_QEMU_HARDDISK_a8735590-8c0d-455a-9e36-1ed693cbdd10'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904334 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.904346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-27-00-02-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904362 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.904373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--923c5540--3b69--54d6--b090--bccde0d698f1-osd--block--923c5540--3b69--54d6--b090--bccde0d698f1', 'dm-uuid-LVM-II044VSc7qX0zAykm1N1e47StvtKMHOQfYefWyYdcT1XJKoLgemSD2EMuRphzNjt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8acd0346--cc61--560a--be8a--825f05553edd-osd--block--8acd0346--cc61--560a--be8a--825f05553edd', 'dm-uuid-LVM-byoLOTJpo7zdj83o1Q3TMwLiBG8164KG8yqfFlinH5MBI91EtSnlyxHzZgT9GR14'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:01:55.904583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807', 'scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part1', 'scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part14', 'scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part15', 'scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part16', 'scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--923c5540--3b69--54d6--b090--bccde0d698f1-osd--block--923c5540--3b69--54d6--b090--bccde0d698f1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QFheq2-iyOV-C2Ex-oYS9-FfkI-yCv5-qAnX1b', 'scsi-0QEMU_QEMU_HARDDISK_a6b08226-ae04-4ebb-8f92-51d42c32f5ac', 'scsi-SQEMU_QEMU_HARDDISK_a6b08226-ae04-4ebb-8f92-51d42c32f5ac'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8acd0346--cc61--560a--be8a--825f05553edd-osd--block--8acd0346--cc61--560a--be8a--825f05553edd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-f3Nqpq-QzCM-Ycoj-awYo-cA9E-Eiz4-XTKewp', 'scsi-0QEMU_QEMU_HARDDISK_3ba6755c-983a-4f3d-8d53-7abda8c22d5d', 'scsi-SQEMU_QEMU_HARDDISK_3ba6755c-983a-4f3d-8d53-7abda8c22d5d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b86602b-3b4a-4669-b84e-8d0be08a4eb8', 'scsi-SQEMU_QEMU_HARDDISK_0b86602b-3b4a-4669-b84e-8d0be08a4eb8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-27-00-02-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:01:55.904700 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.904710 | orchestrator | 2025-03-27 01:01:55.904721 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-03-27 01:01:55.904731 | orchestrator | Thursday 27 March 2025 00:48:36 +0000 (0:00:02.567) 0:00:44.924 ******** 2025-03-27 01:01:55.904741 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.904751 | orchestrator | 2025-03-27 01:01:55.904761 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-03-27 01:01:55.904771 | orchestrator | Thursday 27 March 2025 00:48:36 +0000 (0:00:00.483) 0:00:45.408 ******** 2025-03-27 01:01:55.904781 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.904791 | orchestrator | 2025-03-27 01:01:55.904801 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-03-27 01:01:55.904811 | orchestrator | Thursday 27 March 2025 00:48:37 +0000 (0:00:00.187) 0:00:45.595 ******** 2025-03-27 01:01:55.904821 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.904831 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.904841 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.904851 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.904861 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.904870 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.904880 | orchestrator | 2025-03-27 01:01:55.904890 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-03-27 01:01:55.904900 | orchestrator | Thursday 27 March 2025 00:48:38 +0000 
(0:00:01.074) 0:00:46.670 ******** 2025-03-27 01:01:55.904910 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.904920 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.904945 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.904957 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.904968 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.904978 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.904989 | orchestrator | 2025-03-27 01:01:55.905000 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-03-27 01:01:55.905019 | orchestrator | Thursday 27 March 2025 00:48:39 +0000 (0:00:01.685) 0:00:48.355 ******** 2025-03-27 01:01:55.905030 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.905040 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.905051 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.905061 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.905071 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.905082 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.905092 | orchestrator | 2025-03-27 01:01:55.905103 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-27 01:01:55.905113 | orchestrator | Thursday 27 March 2025 00:48:40 +0000 (0:00:00.794) 0:00:49.149 ******** 2025-03-27 01:01:55.905124 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.905134 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.905144 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.905155 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.905165 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.905228 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.905243 | orchestrator | 2025-03-27 01:01:55.905254 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-27 01:01:55.905265 | orchestrator | Thursday 27 March 2025 00:48:41 +0000 (0:00:01.252) 0:00:50.401 ******** 2025-03-27 01:01:55.905275 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.905286 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.905296 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.905306 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.905316 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.905326 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.905336 | orchestrator | 2025-03-27 01:01:55.905347 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-27 01:01:55.905357 | orchestrator | Thursday 27 March 2025 00:48:42 +0000 (0:00:00.944) 0:00:51.345 ******** 2025-03-27 01:01:55.905367 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.905377 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.905387 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.905397 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.905407 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.905417 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.905427 | orchestrator | 2025-03-27 01:01:55.905437 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-27 01:01:55.905464 | orchestrator | Thursday 27 March 2025 00:48:44 +0000 (0:00:01.753) 0:00:53.099 ******** 2025-03-27 01:01:55.905474 | 
orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.905484 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.905499 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.905509 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.905519 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.905529 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.905539 | orchestrator | 2025-03-27 01:01:55.905549 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-03-27 01:01:55.905559 | orchestrator | Thursday 27 March 2025 00:48:45 +0000 (0:00:01.077) 0:00:54.176 ******** 2025-03-27 01:01:55.905569 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:01:55.905579 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-03-27 01:01:55.905589 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-03-27 01:01:55.905599 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-03-27 01:01:55.905609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-27 01:01:55.905619 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-03-27 01:01:55.905629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-27 01:01:55.905639 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:01:55.905652 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-03-27 01:01:55.905669 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.905680 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-03-27 01:01:55.905689 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.905699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-27 01:01:55.905709 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.905719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:01:55.905729 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.905739 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-27 01:01:55.905749 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-27 01:01:55.905759 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-27 01:01:55.905769 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-27 01:01:55.905779 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-27 01:01:55.905789 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.905799 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-27 01:01:55.905808 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.905818 | orchestrator | 2025-03-27 01:01:55.905828 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-03-27 01:01:55.905838 | orchestrator | Thursday 27 March 2025 00:48:48 +0000 (0:00:03.094) 0:00:57.271 ******** 2025-03-27 01:01:55.905848 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:01:55.905858 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:01:55.905871 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-03-27 01:01:55.905883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 
01:01:55.905895 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.905907 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-03-27 01:01:55.905919 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-03-27 01:01:55.905930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-27 01:01:55.905941 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-03-27 01:01:55.905953 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-03-27 01:01:55.905964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-27 01:01:55.905975 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.905987 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-27 01:01:55.905999 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-27 01:01:55.906010 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-03-27 01:01:55.906046 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.906058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-27 01:01:55.906069 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.906081 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-27 01:01:55.906093 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-27 01:01:55.906160 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.906176 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-27 01:01:55.906188 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-27 01:01:55.906199 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.906210 | orchestrator | 2025-03-27 01:01:55.906222 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-03-27 01:01:55.906234 | orchestrator | Thursday 27 March 2025 00:48:52 +0000 (0:00:03.561) 0:01:00.833 ******** 2025-03-27 01:01:55.906245 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:01:55.906255 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-03-27 01:01:55.906265 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-03-27 01:01:55.906281 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-03-27 01:01:55.906291 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-03-27 01:01:55.906301 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-03-27 01:01:55.906311 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-03-27 01:01:55.906321 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-03-27 01:01:55.906331 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-03-27 01:01:55.906340 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-03-27 01:01:55.906351 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-03-27 01:01:55.906360 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-03-27 01:01:55.906370 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-03-27 01:01:55.906380 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-03-27 01:01:55.906390 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-03-27 01:01:55.906400 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-03-27 01:01:55.906410 | orchestrator | ok: 
[testbed-node-4] => (item=testbed-node-2) 2025-03-27 01:01:55.906420 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-03-27 01:01:55.906429 | orchestrator | 2025-03-27 01:01:55.906483 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-03-27 01:01:55.906495 | orchestrator | Thursday 27 March 2025 00:49:00 +0000 (0:00:07.782) 0:01:08.615 ******** 2025-03-27 01:01:55.906505 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:01:55.906516 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:01:55.906526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:01:55.906536 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-03-27 01:01:55.906546 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-03-27 01:01:55.906556 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-03-27 01:01:55.906566 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.906576 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-03-27 01:01:55.906586 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-03-27 01:01:55.906596 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-03-27 01:01:55.906606 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.906621 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-27 01:01:55.906631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-27 01:01:55.906655 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-27 01:01:55.906666 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.906676 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-27 01:01:55.906686 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-27 01:01:55.906696 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-27 01:01:55.906706 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.906716 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.906726 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-27 01:01:55.906736 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-27 01:01:55.906746 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-27 01:01:55.906756 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.906766 | orchestrator | 2025-03-27 01:01:55.906776 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-03-27 01:01:55.906786 | orchestrator | Thursday 27 March 2025 00:49:02 +0000 (0:00:02.439) 0:01:11.054 ******** 2025-03-27 01:01:55.906796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:01:55.906810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:01:55.906826 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:01:55.906837 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.906847 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-03-27 01:01:55.906857 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-03-27 01:01:55.906867 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-03-27 
01:01:55.906877 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-03-27 01:01:55.906887 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-03-27 01:01:55.906897 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-03-27 01:01:55.906907 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.906917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-27 01:01:55.906927 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.906938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-27 01:01:55.906948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-27 01:01:55.906958 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-27 01:01:55.907025 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-27 01:01:55.907038 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.907047 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-27 01:01:55.907056 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.907065 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-27 01:01:55.907074 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-27 01:01:55.907083 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-27 01:01:55.907092 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.907100 | orchestrator | 2025-03-27 01:01:55.907109 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-03-27 01:01:55.907118 | orchestrator | Thursday 27 March 2025 00:49:03 +0000 (0:00:01.002) 0:01:12.056 ******** 2025-03-27 01:01:55.907127 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-03-27 01:01:55.907136 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-27 01:01:55.907145 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-27 01:01:55.907154 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-27 01:01:55.907163 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-03-27 01:01:55.907172 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-27 01:01:55.907181 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-27 01:01:55.907189 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-27 01:01:55.907198 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-03-27 01:01:55.907207 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-27 01:01:55.907216 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-27 01:01:55.907225 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-27 01:01:55.907233 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-27 01:01:55.907242 | 
orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-27 01:01:55.907251 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-27 01:01:55.907260 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.907274 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.907283 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-27 01:01:55.907292 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-27 01:01:55.907301 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-27 01:01:55.907309 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.907318 | orchestrator | 2025-03-27 01:01:55.907327 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-03-27 01:01:55.907336 | orchestrator | Thursday 27 March 2025 00:49:04 +0000 (0:00:01.206) 0:01:13.263 ******** 2025-03-27 01:01:55.907345 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.907353 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.907362 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.907371 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.907380 | orchestrator | 2025-03-27 01:01:55.907389 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-27 01:01:55.907398 | orchestrator | Thursday 27 March 2025 00:49:06 +0000 (0:00:01.427) 0:01:14.691 ******** 2025-03-27 01:01:55.907406 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.907415 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.907424 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.907433 | orchestrator | 2025-03-27 01:01:55.907452 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-27 01:01:55.907461 | orchestrator | Thursday 27 March 2025 00:49:07 +0000 (0:00:01.070) 0:01:15.761 ******** 2025-03-27 01:01:55.907470 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.907478 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.907487 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.907495 | orchestrator | 2025-03-27 01:01:55.907504 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-27 01:01:55.907512 | orchestrator | Thursday 27 March 2025 00:49:07 +0000 (0:00:00.673) 0:01:16.435 ******** 2025-03-27 01:01:55.907521 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.907529 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.907538 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.907546 | orchestrator | 2025-03-27 01:01:55.907555 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-27 01:01:55.907563 | orchestrator | Thursday 27 March 2025 00:49:08 +0000 (0:00:00.562) 0:01:16.997 ******** 2025-03-27 01:01:55.907572 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.907580 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.907588 | orchestrator | ok: [testbed-node-5] 
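The ceph-facts tasks in this stretch of the run resolve each RGW host's address ("set_fact _radosgw_address to radosgw_address", ok on testbed-node-3/4/5 above) and then assemble the per-host rgw_instances list; the items printed further down ({'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13'/'14'/'15', 'radosgw_frontend_port': 8081}) are the result of that step. A minimal sketch of a set_fact task that produces a list of this shape is shown here for orientation only; it is not the verbatim ceph-ansible source, and radosgw_num_instances / radosgw_frontend_port are assumed variable names used purely for the example.

    # Sketch only: builds rgw_instances entries shaped like
    # {'instance_name': 'rgw0', 'radosgw_address': ..., 'radosgw_frontend_port': 8081},
    # matching the items logged below. _radosgw_address stands for the address
    # resolved by the preceding task; radosgw_num_instances and
    # radosgw_frontend_port are assumptions for this illustration.
    - name: Build rgw_instances without rgw multisite (illustrative sketch)
      ansible.builtin.set_fact:
        rgw_instances: >-
          {{ rgw_instances | default([]) +
             [{'instance_name': 'rgw' ~ item,
               'radosgw_address': _radosgw_address,
               'radosgw_frontend_port': radosgw_frontend_port | int + item | int}] }}
      # One iteration per RGW instance on the host; this run uses a single
      # instance, so only item=0 appears in the log.
      with_sequence: start=0 end={{ radosgw_num_instances | int - 1 }}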
2025-03-27 01:01:55.907597 | orchestrator | 2025-03-27 01:01:55.907605 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-27 01:01:55.907655 | orchestrator | Thursday 27 March 2025 00:49:09 +0000 (0:00:00.864) 0:01:17.862 ******** 2025-03-27 01:01:55.907667 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.907675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.907684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.907693 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.907701 | orchestrator | 2025-03-27 01:01:55.907710 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-27 01:01:55.907718 | orchestrator | Thursday 27 March 2025 00:49:10 +0000 (0:00:00.795) 0:01:18.658 ******** 2025-03-27 01:01:55.907727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.907735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.907744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.907757 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.907766 | orchestrator | 2025-03-27 01:01:55.907774 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-27 01:01:55.907783 | orchestrator | Thursday 27 March 2025 00:49:10 +0000 (0:00:00.683) 0:01:19.341 ******** 2025-03-27 01:01:55.907791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.907800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.907808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.907817 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.907830 | orchestrator | 2025-03-27 01:01:55.907839 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.907847 | orchestrator | Thursday 27 March 2025 00:49:12 +0000 (0:00:01.302) 0:01:20.643 ******** 2025-03-27 01:01:55.907855 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.907864 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.907876 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.907885 | orchestrator | 2025-03-27 01:01:55.907893 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-27 01:01:55.907902 | orchestrator | Thursday 27 March 2025 00:49:12 +0000 (0:00:00.608) 0:01:21.252 ******** 2025-03-27 01:01:55.907910 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-03-27 01:01:55.907919 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-03-27 01:01:55.907928 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-03-27 01:01:55.907936 | orchestrator | 2025-03-27 01:01:55.907945 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-27 01:01:55.907953 | orchestrator | Thursday 27 March 2025 00:49:14 +0000 (0:00:01.863) 0:01:23.115 ******** 2025-03-27 01:01:55.907961 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.907970 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.907978 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.907987 | orchestrator | 2025-03-27 01:01:55.907995 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.908004 | orchestrator | Thursday 27 March 2025 00:49:15 +0000 (0:00:00.767) 0:01:23.883 ******** 2025-03-27 01:01:55.908012 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.908021 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.908029 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.908038 | orchestrator | 2025-03-27 01:01:55.908046 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-27 01:01:55.908055 | orchestrator | Thursday 27 March 2025 00:49:16 +0000 (0:00:01.034) 0:01:24.918 ******** 2025-03-27 01:01:55.908063 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.908072 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.908092 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.908102 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.908111 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.908120 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.908129 | orchestrator | 2025-03-27 01:01:55.908138 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-27 01:01:55.908146 | orchestrator | Thursday 27 March 2025 00:49:17 +0000 (0:00:01.115) 0:01:26.034 ******** 2025-03-27 01:01:55.908155 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.908164 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.908173 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.908182 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.908191 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.908204 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.908213 | orchestrator | 2025-03-27 01:01:55.908225 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-27 01:01:55.908235 | orchestrator | Thursday 27 March 2025 00:49:18 +0000 (0:00:00.861) 0:01:26.895 ******** 2025-03-27 01:01:55.908243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.908252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.908261 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-27 01:01:55.908270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.908279 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.908288 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-27 01:01:55.908298 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-27 01:01:55.908309 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-27 01:01:55.908321 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-27 01:01:55.908331 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.908380 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-27 01:01:55.908394 | orchestrator | skipping: [testbed-node-5] 2025-03-27 
01:01:55.908405 | orchestrator | 2025-03-27 01:01:55.908415 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-03-27 01:01:55.908427 | orchestrator | Thursday 27 March 2025 00:49:19 +0000 (0:00:01.018) 0:01:27.914 ******** 2025-03-27 01:01:55.908437 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.908462 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.908473 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.908483 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.908493 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.908503 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.908513 | orchestrator | 2025-03-27 01:01:55.908523 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-03-27 01:01:55.908532 | orchestrator | Thursday 27 March 2025 00:49:21 +0000 (0:00:01.546) 0:01:29.461 ******** 2025-03-27 01:01:55.908542 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:01:55.908552 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:01:55.908562 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:01:55.908572 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-03-27 01:01:55.908582 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-27 01:01:55.908591 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-27 01:01:55.908601 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-27 01:01:55.908611 | orchestrator | 2025-03-27 01:01:55.908621 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-03-27 01:01:55.908631 | orchestrator | Thursday 27 March 2025 00:49:21 +0000 (0:00:00.919) 0:01:30.381 ******** 2025-03-27 01:01:55.908641 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:01:55.908651 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:01:55.908661 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:01:55.908671 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-03-27 01:01:55.908680 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-27 01:01:55.908688 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-27 01:01:55.908697 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-27 01:01:55.908714 | orchestrator | 2025-03-27 01:01:55.908723 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-27 01:01:55.908732 | orchestrator | Thursday 27 March 2025 00:49:24 +0000 (0:00:02.360) 0:01:32.741 ******** 2025-03-27 01:01:55.908741 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.908751 | orchestrator | 2025-03-27 01:01:55.908759 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-03-27 01:01:55.908767 | orchestrator | Thursday 27 March 2025 00:49:25 +0000 (0:00:01.513) 0:01:34.255 ******** 2025-03-27 01:01:55.908776 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.908784 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.908793 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.908801 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.908810 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.908819 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.908827 | orchestrator | 2025-03-27 01:01:55.908835 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-27 01:01:55.908844 | orchestrator | Thursday 27 March 2025 00:49:27 +0000 (0:00:01.514) 0:01:35.769 ******** 2025-03-27 01:01:55.908852 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.908861 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.908869 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.908878 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.908886 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.908894 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.908903 | orchestrator | 2025-03-27 01:01:55.908911 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-27 01:01:55.908920 | orchestrator | Thursday 27 March 2025 00:49:28 +0000 (0:00:01.402) 0:01:37.172 ******** 2025-03-27 01:01:55.908928 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.908937 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.908945 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.908954 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.908962 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.908970 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.908979 | orchestrator | 2025-03-27 01:01:55.908988 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-03-27 01:01:55.908996 | orchestrator | Thursday 27 March 2025 00:49:30 +0000 (0:00:01.606) 0:01:38.779 ******** 2025-03-27 01:01:55.909004 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.909013 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.909021 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.909030 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.909038 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.909046 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.909055 | orchestrator | 2025-03-27 01:01:55.909103 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-27 01:01:55.909114 | orchestrator | Thursday 27 March 2025 00:49:31 +0000 (0:00:01.604) 0:01:40.384 ******** 2025-03-27 01:01:55.909134 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.909144 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.909209 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.909221 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.909230 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.909239 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.909247 | orchestrator | 2025-03-27 01:01:55.909256 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-27 
01:01:55.909264 | orchestrator | Thursday 27 March 2025 00:49:33 +0000 (0:00:01.323) 0:01:41.707 ******** 2025-03-27 01:01:55.909273 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.909281 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.909289 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.909304 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.909312 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.909321 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.909329 | orchestrator | 2025-03-27 01:01:55.909338 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-27 01:01:55.909346 | orchestrator | Thursday 27 March 2025 00:49:34 +0000 (0:00:01.323) 0:01:43.030 ******** 2025-03-27 01:01:55.909355 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.909363 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.909371 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.909380 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.909388 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.909396 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.909405 | orchestrator | 2025-03-27 01:01:55.909413 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-27 01:01:55.909477 | orchestrator | Thursday 27 March 2025 00:49:35 +0000 (0:00:00.781) 0:01:43.812 ******** 2025-03-27 01:01:55.909491 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.909499 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.909508 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.909516 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.909524 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.909533 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.909571 | orchestrator | 2025-03-27 01:01:55.909581 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-27 01:01:55.909589 | orchestrator | Thursday 27 March 2025 00:49:36 +0000 (0:00:01.261) 0:01:45.074 ******** 2025-03-27 01:01:55.909598 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.909607 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.909615 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.909624 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.909632 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.909640 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.909649 | orchestrator | 2025-03-27 01:01:55.909657 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-27 01:01:55.909666 | orchestrator | Thursday 27 March 2025 00:49:37 +0000 (0:00:00.890) 0:01:45.964 ******** 2025-03-27 01:01:55.909674 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.909682 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.909691 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.909699 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.909708 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.909716 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.909725 | orchestrator | 2025-03-27 01:01:55.909733 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 
2025-03-27 01:01:55.909742 | orchestrator | Thursday 27 March 2025 00:49:38 +0000 (0:00:00.900) 0:01:46.864 ******** 2025-03-27 01:01:55.909750 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.909759 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.909767 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.909776 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.909785 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.909812 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.909822 | orchestrator | 2025-03-27 01:01:55.909830 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-27 01:01:55.909839 | orchestrator | Thursday 27 March 2025 00:49:39 +0000 (0:00:01.122) 0:01:47.987 ******** 2025-03-27 01:01:55.909847 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.909869 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.909879 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.909889 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.909898 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.909908 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.909924 | orchestrator | 2025-03-27 01:01:55.909942 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-27 01:01:55.909952 | orchestrator | Thursday 27 March 2025 00:49:40 +0000 (0:00:00.892) 0:01:48.879 ******** 2025-03-27 01:01:55.909974 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.909984 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.909994 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.910042 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.910054 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.910063 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.910078 | orchestrator | 2025-03-27 01:01:55.910087 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-27 01:01:55.910097 | orchestrator | Thursday 27 March 2025 00:49:41 +0000 (0:00:00.713) 0:01:49.593 ******** 2025-03-27 01:01:55.910106 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.910115 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.910124 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.910133 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.910143 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.910151 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.910160 | orchestrator | 2025-03-27 01:01:55.910170 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-27 01:01:55.910179 | orchestrator | Thursday 27 March 2025 00:49:42 +0000 (0:00:00.934) 0:01:50.528 ******** 2025-03-27 01:01:55.910188 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.910197 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.910205 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.910213 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.910221 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.910229 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.910237 | orchestrator | 2025-03-27 01:01:55.910244 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-27 01:01:55.910305 | orchestrator | Thursday 27 March 2025 00:49:42 +0000 
(0:00:00.670) 0:01:51.199 ******** 2025-03-27 01:01:55.910318 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.910326 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.910334 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.910342 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.910350 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.910358 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.910366 | orchestrator | 2025-03-27 01:01:55.910374 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-27 01:01:55.910382 | orchestrator | Thursday 27 March 2025 00:49:43 +0000 (0:00:00.919) 0:01:52.119 ******** 2025-03-27 01:01:55.910390 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.910398 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.910406 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.910414 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.910422 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.910430 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.910438 | orchestrator | 2025-03-27 01:01:55.910458 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-27 01:01:55.910466 | orchestrator | Thursday 27 March 2025 00:49:44 +0000 (0:00:00.677) 0:01:52.796 ******** 2025-03-27 01:01:55.910474 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.910482 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.910490 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.910497 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.910505 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.910513 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.910521 | orchestrator | 2025-03-27 01:01:55.910529 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-27 01:01:55.910537 | orchestrator | Thursday 27 March 2025 00:49:45 +0000 (0:00:01.043) 0:01:53.840 ******** 2025-03-27 01:01:55.910550 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.910558 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.910566 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.910574 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.910582 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.910590 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.910597 | orchestrator | 2025-03-27 01:01:55.910605 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-27 01:01:55.910613 | orchestrator | Thursday 27 March 2025 00:49:46 +0000 (0:00:00.948) 0:01:54.789 ******** 2025-03-27 01:01:55.910621 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.910629 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.910636 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.910644 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.910652 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.910659 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.910667 | orchestrator | 2025-03-27 01:01:55.910675 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-27 01:01:55.910687 | orchestrator | Thursday 27 March 2025 00:49:47 +0000 (0:00:00.935) 0:01:55.725 ******** 2025-03-27 01:01:55.910695 | 
orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.910703 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.910711 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.910718 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.910726 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.910734 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.910742 | orchestrator | 2025-03-27 01:01:55.910749 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-27 01:01:55.910757 | orchestrator | Thursday 27 March 2025 00:49:47 +0000 (0:00:00.666) 0:01:56.391 ******** 2025-03-27 01:01:55.910765 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.910773 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.910784 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.910792 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.910800 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.910808 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.910816 | orchestrator | 2025-03-27 01:01:55.910823 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-27 01:01:55.910831 | orchestrator | Thursday 27 March 2025 00:49:48 +0000 (0:00:00.967) 0:01:57.359 ******** 2025-03-27 01:01:55.910839 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.910847 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.910855 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.910862 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.910870 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.910878 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.910885 | orchestrator | 2025-03-27 01:01:55.910893 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-27 01:01:55.910901 | orchestrator | Thursday 27 March 2025 00:49:49 +0000 (0:00:00.720) 0:01:58.079 ******** 2025-03-27 01:01:55.910909 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.910917 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.910924 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.910932 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.910942 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.910951 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.910960 | orchestrator | 2025-03-27 01:01:55.910969 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-27 01:01:55.910977 | orchestrator | Thursday 27 March 2025 00:49:50 +0000 (0:00:00.965) 0:01:59.045 ******** 2025-03-27 01:01:55.910986 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.910995 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.911004 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.911013 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.911026 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.911035 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.911044 | orchestrator | 2025-03-27 01:01:55.911053 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-27 01:01:55.911062 | orchestrator | Thursday 27 March 2025 00:49:51 +0000 (0:00:00.700) 0:01:59.745 ******** 2025-03-27 
01:01:55.911070 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.911079 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.911087 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.911096 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.911105 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.911114 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.911123 | orchestrator | 2025-03-27 01:01:55.911178 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-27 01:01:55.911191 | orchestrator | Thursday 27 March 2025 00:49:52 +0000 (0:00:00.852) 0:02:00.597 ******** 2025-03-27 01:01:55.911200 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.911210 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.911219 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.911228 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.911238 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.911247 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.911256 | orchestrator | 2025-03-27 01:01:55.911265 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-27 01:01:55.911274 | orchestrator | Thursday 27 March 2025 00:49:52 +0000 (0:00:00.749) 0:02:01.347 ******** 2025-03-27 01:01:55.911284 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.911293 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.911302 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.911309 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.911317 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.911325 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.911333 | orchestrator | 2025-03-27 01:01:55.911341 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-27 01:01:55.911350 | orchestrator | Thursday 27 March 2025 00:49:53 +0000 (0:00:00.979) 0:02:02.326 ******** 2025-03-27 01:01:55.911358 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.911366 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.911373 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.911381 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.911389 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.911397 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.911405 | orchestrator | 2025-03-27 01:01:55.911413 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-27 01:01:55.911421 | orchestrator | Thursday 27 March 2025 00:49:54 +0000 (0:00:00.843) 0:02:03.170 ******** 2025-03-27 01:01:55.911429 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.911436 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.911478 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.911487 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.911494 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.911507 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.911515 | orchestrator | 2025-03-27 01:01:55.911523 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-27 01:01:55.911531 
| orchestrator | Thursday 27 March 2025 00:49:55 +0000 (0:00:01.044) 0:02:04.215 ******** 2025-03-27 01:01:55.911539 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.911547 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.911554 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.911562 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.911570 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.911583 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.911591 | orchestrator | 2025-03-27 01:01:55.911599 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-27 01:01:55.911607 | orchestrator | Thursday 27 March 2025 00:49:56 +0000 (0:00:00.798) 0:02:05.013 ******** 2025-03-27 01:01:55.911614 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.911622 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.911630 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.911638 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.911645 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.911653 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.911661 | orchestrator | 2025-03-27 01:01:55.911669 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-27 01:01:55.911677 | orchestrator | Thursday 27 March 2025 00:49:58 +0000 (0:00:01.614) 0:02:06.628 ******** 2025-03-27 01:01:55.911696 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-27 01:01:55.911704 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-27 01:01:55.911712 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-27 01:01:55.911720 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-27 01:01:55.911728 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.911736 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-27 01:01:55.911744 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-27 01:01:55.911751 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.911759 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.911767 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.911775 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.911782 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.911790 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.911798 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.911806 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.911813 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.911825 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.911833 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.911841 | orchestrator | 2025-03-27 01:01:55.911849 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-27 01:01:55.911857 | orchestrator | Thursday 27 March 2025 00:49:59 +0000 (0:00:00.815) 0:02:07.443 ******** 2025-03-27 01:01:55.911865 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-03-27 01:01:55.911872 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-03-27 01:01:55.911880 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-03-27 
01:01:55.911888 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.911896 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-03-27 01:01:55.911904 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-03-27 01:01:55.911912 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-03-27 01:01:55.911920 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.911974 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-03-27 01:01:55.911987 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-03-27 01:01:55.911995 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.912004 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-03-27 01:01:55.912012 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-03-27 01:01:55.912019 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.912026 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.912033 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-03-27 01:01:55.912041 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-03-27 01:01:55.912048 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.912060 | orchestrator | 2025-03-27 01:01:55.912067 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-27 01:01:55.912074 | orchestrator | Thursday 27 March 2025 00:50:00 +0000 (0:00:01.167) 0:02:08.610 ******** 2025-03-27 01:01:55.912081 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912089 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.912096 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.912103 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.912110 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.912117 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.912124 | orchestrator | 2025-03-27 01:01:55.912131 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-27 01:01:55.912138 | orchestrator | Thursday 27 March 2025 00:50:00 +0000 (0:00:00.794) 0:02:09.405 ******** 2025-03-27 01:01:55.912145 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912152 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.912159 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.912166 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.912174 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.912181 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.912188 | orchestrator | 2025-03-27 01:01:55.912195 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-27 01:01:55.912202 | orchestrator | Thursday 27 March 2025 00:50:01 +0000 (0:00:00.974) 0:02:10.380 ******** 2025-03-27 01:01:55.912209 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912216 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.912223 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.912231 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.912238 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.912245 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.912252 | orchestrator | 
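The ceph-facts tasks around this point pick the RADOS gateway bind address from one of three sources, as the task names themselves show: radosgw_address_block, an explicit radosgw_address, or radosgw_interface (with separate ipv4 and ipv6 variants). As a rough illustration of the address-block case only — the function and variable names are assumptions, not ceph-ansible source, and the example addresses simply reuse the 192.168.16.x values visible in this run — the selection boils down to choosing the host IP that falls inside the configured block:

    import ipaddress

    def pick_radosgw_address(host_addresses, radosgw_address_block):
        """Return the first host IP that lies inside the configured address block."""
        net = ipaddress.ip_network(radosgw_address_block)
        for addr in host_addresses:
            if ipaddress.ip_address(addr) in net:
                return addr
        raise LookupError(f"no address inside {radosgw_address_block}")

    # Example with values resembling this deployment's management network.
    print(pick_radosgw_address(["10.0.0.7", "192.168.16.13"], "192.168.16.0/20"))

In the role itself this is presumably expressed with Ansible/Jinja filters rather than Python; the sketch only captures the selection logic behind the "_radosgw_address to radosgw_address_block" tasks that follow.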
2025-03-27 01:01:55.912259 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-27 01:01:55.912266 | orchestrator | Thursday 27 March 2025 00:50:02 +0000 (0:00:00.677) 0:02:11.058 ******** 2025-03-27 01:01:55.912273 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912280 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.912287 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.912295 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.912302 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.912309 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.912316 | orchestrator | 2025-03-27 01:01:55.912323 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-27 01:01:55.912330 | orchestrator | Thursday 27 March 2025 00:50:03 +0000 (0:00:00.931) 0:02:11.989 ******** 2025-03-27 01:01:55.912337 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912344 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.912355 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.912362 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.912369 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.912376 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.912383 | orchestrator | 2025-03-27 01:01:55.912393 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-27 01:01:55.912400 | orchestrator | Thursday 27 March 2025 00:50:04 +0000 (0:00:00.760) 0:02:12.750 ******** 2025-03-27 01:01:55.912407 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912414 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.912421 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.912427 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.912434 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.912452 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.912459 | orchestrator | 2025-03-27 01:01:55.912466 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-27 01:01:55.912480 | orchestrator | Thursday 27 March 2025 00:50:05 +0000 (0:00:00.915) 0:02:13.665 ******** 2025-03-27 01:01:55.912486 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.912493 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.912500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.912507 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912514 | orchestrator | 2025-03-27 01:01:55.912521 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-27 01:01:55.912528 | orchestrator | Thursday 27 March 2025 00:50:05 +0000 (0:00:00.488) 0:02:14.154 ******** 2025-03-27 01:01:55.912535 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.912542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.912549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.912573 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912581 | orchestrator | 2025-03-27 01:01:55.912588 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] 
****** 2025-03-27 01:01:55.912594 | orchestrator | Thursday 27 March 2025 00:50:06 +0000 (0:00:00.503) 0:02:14.657 ******** 2025-03-27 01:01:55.912601 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.912608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.912615 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.912662 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912672 | orchestrator | 2025-03-27 01:01:55.912680 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.912687 | orchestrator | Thursday 27 March 2025 00:50:06 +0000 (0:00:00.430) 0:02:15.088 ******** 2025-03-27 01:01:55.912694 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912702 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.912709 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.912716 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.912723 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.912730 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.912737 | orchestrator | 2025-03-27 01:01:55.912744 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-27 01:01:55.912752 | orchestrator | Thursday 27 March 2025 00:50:07 +0000 (0:00:00.752) 0:02:15.841 ******** 2025-03-27 01:01:55.912759 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-27 01:01:55.912766 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912773 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-27 01:01:55.912780 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.912787 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-27 01:01:55.912794 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.912801 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.912808 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.912815 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.912822 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.912830 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.912837 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.912844 | orchestrator | 2025-03-27 01:01:55.912851 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-27 01:01:55.912858 | orchestrator | Thursday 27 March 2025 00:50:08 +0000 (0:00:01.200) 0:02:17.041 ******** 2025-03-27 01:01:55.912865 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912872 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.912879 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.912886 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.912893 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.912900 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.912912 | orchestrator | 2025-03-27 01:01:55.912919 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.912926 | orchestrator | Thursday 27 March 2025 00:50:09 +0000 (0:00:00.890) 0:02:17.932 ******** 2025-03-27 01:01:55.912933 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.912941 | orchestrator | skipping: [testbed-node-1] 
2025-03-27 01:01:55.912948 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.912955 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.912962 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.912969 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.912976 | orchestrator | 2025-03-27 01:01:55.912983 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-27 01:01:55.912990 | orchestrator | Thursday 27 March 2025 00:50:10 +0000 (0:00:00.654) 0:02:18.586 ******** 2025-03-27 01:01:55.912998 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-27 01:01:55.913005 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.913012 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-27 01:01:55.913019 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.913026 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-27 01:01:55.913033 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.913041 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.913048 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.913055 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.913062 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.913069 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.913076 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.913083 | orchestrator | 2025-03-27 01:01:55.913090 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-27 01:01:55.913097 | orchestrator | Thursday 27 March 2025 00:50:11 +0000 (0:00:01.420) 0:02:20.006 ******** 2025-03-27 01:01:55.913104 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.913112 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.913119 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.913126 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.913133 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.913144 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.913151 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.913158 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.913166 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.913173 | orchestrator | 2025-03-27 01:01:55.913180 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-27 01:01:55.913187 | orchestrator | Thursday 27 March 2025 00:50:12 +0000 (0:00:00.726) 0:02:20.732 ******** 2025-03-27 01:01:55.913194 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.913211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.913219 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.913226 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-27 01:01:55.913233 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-27 01:01:55.913240 | orchestrator | 
skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-27 01:01:55.913247 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.913258 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-27 01:01:55.913303 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-27 01:01:55.913313 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-27 01:01:55.913325 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.913332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.913339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.913346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.913353 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.913363 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-27 01:01:55.913371 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-27 01:01:55.913377 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-27 01:01:55.913384 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.913392 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.913399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-27 01:01:55.913406 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-27 01:01:55.913412 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-27 01:01:55.913419 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.913426 | orchestrator | 2025-03-27 01:01:55.913433 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-27 01:01:55.913473 | orchestrator | Thursday 27 March 2025 00:50:14 +0000 (0:00:01.797) 0:02:22.530 ******** 2025-03-27 01:01:55.913482 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.913489 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.913495 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.913502 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.913509 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.913516 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.913523 | orchestrator | 2025-03-27 01:01:55.913533 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-27 01:01:55.913540 | orchestrator | Thursday 27 March 2025 00:50:15 +0000 (0:00:01.356) 0:02:23.887 ******** 2025-03-27 01:01:55.913547 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.913554 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.913560 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.913567 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-27 01:01:55.913574 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.913581 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-27 01:01:55.913588 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.913594 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-27 01:01:55.913601 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.913608 | orchestrator | 2025-03-27 01:01:55.913615 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-27 01:01:55.913622 | orchestrator | Thursday 27 
March 2025 00:50:16 +0000 (0:00:01.488) 0:02:25.375 ******** 2025-03-27 01:01:55.913629 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.913636 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.913642 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.913649 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.913656 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.913663 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.913670 | orchestrator | 2025-03-27 01:01:55.913677 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-27 01:01:55.913684 | orchestrator | Thursday 27 March 2025 00:50:18 +0000 (0:00:01.604) 0:02:26.980 ******** 2025-03-27 01:01:55.913691 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.913698 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.913704 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.913711 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.913718 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.913725 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.913732 | orchestrator | 2025-03-27 01:01:55.913747 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-03-27 01:01:55.913754 | orchestrator | Thursday 27 March 2025 00:50:20 +0000 (0:00:01.586) 0:02:28.566 ******** 2025-03-27 01:01:55.913761 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.913767 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.913774 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.913781 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.913787 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.913794 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.913801 | orchestrator | 2025-03-27 01:01:55.913811 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-03-27 01:01:55.913818 | orchestrator | Thursday 27 March 2025 00:50:21 +0000 (0:00:01.835) 0:02:30.402 ******** 2025-03-27 01:01:55.913825 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.913831 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.913838 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.913845 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.913852 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.913859 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.913865 | orchestrator | 2025-03-27 01:01:55.913872 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-03-27 01:01:55.913879 | orchestrator | Thursday 27 March 2025 00:50:25 +0000 (0:00:03.808) 0:02:34.211 ******** 2025-03-27 01:01:55.913886 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.913894 | orchestrator | 2025-03-27 01:01:55.913900 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-03-27 01:01:55.913907 | orchestrator | Thursday 27 March 2025 00:50:27 +0000 (0:00:01.319) 0:02:35.530 ******** 2025-03-27 01:01:55.913914 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.913921 | orchestrator | skipping: [testbed-node-1] 2025-03-27 
01:01:55.913927 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.913934 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.913941 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.913950 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.913958 | orchestrator | 2025-03-27 01:01:55.914006 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-03-27 01:01:55.914031 | orchestrator | Thursday 27 March 2025 00:50:28 +0000 (0:00:00.937) 0:02:36.468 ******** 2025-03-27 01:01:55.914040 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.914047 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.914054 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.914060 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.914068 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.914078 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.914085 | orchestrator | 2025-03-27 01:01:55.914092 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-03-27 01:01:55.914099 | orchestrator | Thursday 27 March 2025 00:50:28 +0000 (0:00:00.664) 0:02:37.133 ******** 2025-03-27 01:01:55.914107 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-27 01:01:55.914113 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-27 01:01:55.914120 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-27 01:01:55.914127 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-27 01:01:55.914134 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-27 01:01:55.914142 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-27 01:01:55.914148 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-27 01:01:55.914160 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-27 01:01:55.914167 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-03-27 01:01:55.914173 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-27 01:01:55.914180 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-27 01:01:55.914187 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-03-27 01:01:55.914194 | orchestrator | 2025-03-27 01:01:55.914201 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-03-27 01:01:55.914208 | orchestrator | Thursday 27 March 2025 00:50:30 +0000 (0:00:01.731) 0:02:38.864 ******** 2025-03-27 01:01:55.914215 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.914223 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.914229 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.914236 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.914243 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.914250 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.914257 | orchestrator | 2025-03-27 01:01:55.914264 | orchestrator | TASK [ceph-container-common : restore 
certificates selinux context] ************ 2025-03-27 01:01:55.914271 | orchestrator | Thursday 27 March 2025 00:50:31 +0000 (0:00:01.263) 0:02:40.128 ******** 2025-03-27 01:01:55.914279 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.914286 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.914293 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.914299 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.914305 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.914311 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.914318 | orchestrator | 2025-03-27 01:01:55.914324 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-03-27 01:01:55.914330 | orchestrator | Thursday 27 March 2025 00:50:32 +0000 (0:00:01.013) 0:02:41.142 ******** 2025-03-27 01:01:55.914336 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.914342 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.914348 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.914354 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.914360 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.914366 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.914372 | orchestrator | 2025-03-27 01:01:55.914378 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-03-27 01:01:55.914384 | orchestrator | Thursday 27 March 2025 00:50:33 +0000 (0:00:00.823) 0:02:41.965 ******** 2025-03-27 01:01:55.914391 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.914397 | orchestrator | 2025-03-27 01:01:55.914403 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-03-27 01:01:55.914409 | orchestrator | Thursday 27 March 2025 00:50:35 +0000 (0:00:01.536) 0:02:43.502 ******** 2025-03-27 01:01:55.914415 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.914421 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.914427 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.914456 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.914463 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.914469 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.914475 | orchestrator | 2025-03-27 01:01:55.914484 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-03-27 01:01:55.914490 | orchestrator | Thursday 27 March 2025 00:51:03 +0000 (0:00:28.919) 0:03:12.421 ******** 2025-03-27 01:01:55.914496 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-27 01:01:55.914502 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-27 01:01:55.914512 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-03-27 01:01:55.914518 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.914524 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-27 01:01:55.914530 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-27 01:01:55.914575 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  
2025-03-27 01:01:55.914584 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.914591 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-27 01:01:55.914597 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-27 01:01:55.914603 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-03-27 01:01:55.914609 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.914616 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-27 01:01:55.914622 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-27 01:01:55.914628 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-03-27 01:01:55.914634 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.914641 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-27 01:01:55.914647 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-27 01:01:55.914653 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-03-27 01:01:55.914660 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.914666 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-03-27 01:01:55.914672 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-03-27 01:01:55.914678 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-03-27 01:01:55.914685 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.914691 | orchestrator | 2025-03-27 01:01:55.914697 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-03-27 01:01:55.914704 | orchestrator | Thursday 27 March 2025 00:51:05 +0000 (0:00:01.132) 0:03:13.553 ******** 2025-03-27 01:01:55.914710 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.914716 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.914723 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.914729 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.914735 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.914741 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.914747 | orchestrator | 2025-03-27 01:01:55.914754 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-03-27 01:01:55.914760 | orchestrator | Thursday 27 March 2025 00:51:06 +0000 (0:00:01.176) 0:03:14.730 ******** 2025-03-27 01:01:55.914766 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.914772 | orchestrator | 2025-03-27 01:01:55.914779 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-03-27 01:01:55.914785 | orchestrator | Thursday 27 March 2025 00:51:06 +0000 (0:00:00.219) 0:03:14.949 ******** 2025-03-27 01:01:55.914791 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.914797 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.914803 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.914810 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.914816 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.914822 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.914828 | 
orchestrator | 2025-03-27 01:01:55.914834 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-03-27 01:01:55.914841 | orchestrator | Thursday 27 March 2025 00:51:07 +0000 (0:00:01.063) 0:03:16.013 ******** 2025-03-27 01:01:55.914847 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.914860 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.914866 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.914873 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.914879 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.914885 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.914891 | orchestrator | 2025-03-27 01:01:55.914897 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-03-27 01:01:55.914903 | orchestrator | Thursday 27 March 2025 00:51:08 +0000 (0:00:01.259) 0:03:17.272 ******** 2025-03-27 01:01:55.914910 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.914919 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.914926 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.914932 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.914938 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.914944 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.914950 | orchestrator | 2025-03-27 01:01:55.914957 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-03-27 01:01:55.914966 | orchestrator | Thursday 27 March 2025 00:51:09 +0000 (0:00:00.856) 0:03:18.129 ******** 2025-03-27 01:01:55.914972 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.914979 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.914985 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.914991 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.914997 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.915004 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.915010 | orchestrator | 2025-03-27 01:01:55.915016 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-03-27 01:01:55.915022 | orchestrator | Thursday 27 March 2025 00:51:13 +0000 (0:00:03.345) 0:03:21.474 ******** 2025-03-27 01:01:55.915029 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.915035 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.915041 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.915047 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.915054 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.915060 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.915066 | orchestrator | 2025-03-27 01:01:55.915072 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-03-27 01:01:55.915078 | orchestrator | Thursday 27 March 2025 00:51:13 +0000 (0:00:00.701) 0:03:22.175 ******** 2025-03-27 01:01:55.915085 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.915092 | orchestrator | 2025-03-27 01:01:55.915131 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-03-27 01:01:55.915141 | orchestrator | Thursday 27 March 2025 00:51:15 +0000 (0:00:01.499) 0:03:23.674 ******** 2025-03-27 
01:01:55.915147 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.915154 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.915160 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.915166 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.915173 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.915179 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.915185 | orchestrator | 2025-03-27 01:01:55.915191 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-03-27 01:01:55.915197 | orchestrator | Thursday 27 March 2025 00:51:16 +0000 (0:00:01.033) 0:03:24.708 ******** 2025-03-27 01:01:55.915204 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.915210 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.915216 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.915222 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.915229 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.915235 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.915241 | orchestrator | 2025-03-27 01:01:55.915247 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-03-27 01:01:55.915257 | orchestrator | Thursday 27 March 2025 00:51:17 +0000 (0:00:00.768) 0:03:25.477 ******** 2025-03-27 01:01:55.915264 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.915270 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.915276 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.915282 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.915289 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.915295 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.915301 | orchestrator | 2025-03-27 01:01:55.915307 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-03-27 01:01:55.915314 | orchestrator | Thursday 27 March 2025 00:51:18 +0000 (0:00:01.152) 0:03:26.629 ******** 2025-03-27 01:01:55.915320 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.915326 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.915332 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.915338 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.915345 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.915351 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.915357 | orchestrator | 2025-03-27 01:01:55.915363 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-03-27 01:01:55.915370 | orchestrator | Thursday 27 March 2025 00:51:18 +0000 (0:00:00.804) 0:03:27.434 ******** 2025-03-27 01:01:55.915376 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.915382 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.915388 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.915394 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.915400 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.915407 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.915413 | orchestrator | 2025-03-27 01:01:55.915419 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-03-27 01:01:55.915425 | orchestrator | Thursday 27 March 2025 00:51:20 +0000 (0:00:01.080) 0:03:28.515 ******** 
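[editor's note] The per-release set_fact cascade running through this stretch follows a simple pattern: the Ceph version is read out of the pulled image, and one conditional set_fact exists per release name, so every branch except the one matching the parsed major version shows up as "skipping". The sketch below illustrates that shape with one release branch; it is an illustration of the pattern under assumed variable names, not the actual ceph-ansible release.yml.

# Sketch of the version -> release pattern; not the actual ceph-ansible release.yml.
- name: get ceph version
  ansible.builtin.command: >
    docker run --rm --entrypoint /usr/bin/ceph
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }} --version
  register: ceph_version
  changed_when: false

- name: set_fact ceph_version ceph_version.stdout.split
  ansible.builtin.set_fact:
    ceph_version: "{{ ceph_version.stdout.split(' ')[2] }}"

# One branch like this exists per release name; with the 17.2.7 image pulled
# earlier, only the quincy branch applies and the others are skipped.
- name: set_fact ceph_release quincy
  ansible.builtin.set_fact:
    ceph_release: quincy
  when: ceph_version.split('.')[0] is version('17', '==')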
2025-03-27 01:01:55.915431 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.915438 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.915477 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.915483 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.915493 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.915499 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.915506 | orchestrator | 2025-03-27 01:01:55.915512 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-03-27 01:01:55.915518 | orchestrator | Thursday 27 March 2025 00:51:20 +0000 (0:00:00.894) 0:03:29.410 ******** 2025-03-27 01:01:55.915524 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.915530 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.915537 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.915543 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.915549 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.915555 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.915561 | orchestrator | 2025-03-27 01:01:55.915567 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-03-27 01:01:55.915574 | orchestrator | Thursday 27 March 2025 00:51:21 +0000 (0:00:00.983) 0:03:30.393 ******** 2025-03-27 01:01:55.915580 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.915586 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.915592 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.915598 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.915604 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.915610 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.915617 | orchestrator | 2025-03-27 01:01:55.915631 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-27 01:01:55.915638 | orchestrator | Thursday 27 March 2025 00:51:23 +0000 (0:00:01.474) 0:03:31.868 ******** 2025-03-27 01:01:55.915644 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.915655 | orchestrator | 2025-03-27 01:01:55.915661 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-03-27 01:01:55.915667 | orchestrator | Thursday 27 March 2025 00:51:24 +0000 (0:00:01.434) 0:03:33.302 ******** 2025-03-27 01:01:55.915673 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-03-27 01:01:55.915679 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-03-27 01:01:55.915685 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-03-27 01:01:55.915691 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-03-27 01:01:55.915698 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-03-27 01:01:55.915704 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-03-27 01:01:55.915710 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-03-27 01:01:55.915716 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-03-27 01:01:55.915757 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-03-27 01:01:55.915766 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-03-27 
01:01:55.915772 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-03-27 01:01:55.915778 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-03-27 01:01:55.915784 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-03-27 01:01:55.915791 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-03-27 01:01:55.915797 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-03-27 01:01:55.915803 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-03-27 01:01:55.915809 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-03-27 01:01:55.915815 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-03-27 01:01:55.915822 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-03-27 01:01:55.915828 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-03-27 01:01:55.915834 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-03-27 01:01:55.915840 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-03-27 01:01:55.915846 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-03-27 01:01:55.915852 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-03-27 01:01:55.915858 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-03-27 01:01:55.915864 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-03-27 01:01:55.915870 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-03-27 01:01:55.915876 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-03-27 01:01:55.915882 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-03-27 01:01:55.915889 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-03-27 01:01:55.915895 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-03-27 01:01:55.915901 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-03-27 01:01:55.915907 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-03-27 01:01:55.915913 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-03-27 01:01:55.915919 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-03-27 01:01:55.915925 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-03-27 01:01:55.915934 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-03-27 01:01:55.915941 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-27 01:01:55.915947 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-03-27 01:01:55.915953 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-03-27 01:01:55.915963 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-03-27 01:01:55.915969 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-27 01:01:55.915975 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-03-27 01:01:55.915981 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-03-27 01:01:55.915988 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-27 01:01:55.915994 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/radosgw) 2025-03-27 01:01:55.916000 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-27 01:01:55.916006 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-03-27 01:01:55.916011 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-27 01:01:55.916017 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-27 01:01:55.916023 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-03-27 01:01:55.916029 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-03-27 01:01:55.916034 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-03-27 01:01:55.916040 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-27 01:01:55.916046 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-03-27 01:01:55.916052 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-27 01:01:55.916057 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-03-27 01:01:55.916063 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-27 01:01:55.916069 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-27 01:01:55.916075 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-27 01:01:55.916080 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-27 01:01:55.916086 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-27 01:01:55.916092 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-03-27 01:01:55.916098 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-27 01:01:55.916103 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-27 01:01:55.916109 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-27 01:01:55.916115 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-27 01:01:55.916150 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-27 01:01:55.916158 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-03-27 01:01:55.916164 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-27 01:01:55.916170 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-27 01:01:55.916176 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-27 01:01:55.916181 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-27 01:01:55.916187 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-03-27 01:01:55.916193 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-03-27 01:01:55.916199 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-27 01:01:55.916205 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-27 01:01:55.916211 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-03-27 01:01:55.916217 | orchestrator | changed: [testbed-node-4] 
=> (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-27 01:01:55.916223 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-03-27 01:01:55.916233 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-03-27 01:01:55.916238 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-03-27 01:01:55.916244 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-03-27 01:01:55.916250 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-03-27 01:01:55.916256 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-03-27 01:01:55.916262 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-03-27 01:01:55.916267 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-03-27 01:01:55.916274 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-03-27 01:01:55.916279 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-03-27 01:01:55.916285 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-03-27 01:01:55.916291 | orchestrator | 2025-03-27 01:01:55.916297 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-27 01:01:55.916305 | orchestrator | Thursday 27 March 2025 00:51:31 +0000 (0:00:06.401) 0:03:39.704 ******** 2025-03-27 01:01:55.916311 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.916317 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.916323 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.916329 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.916335 | orchestrator | 2025-03-27 01:01:55.916341 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-03-27 01:01:55.916347 | orchestrator | Thursday 27 March 2025 00:51:32 +0000 (0:00:01.653) 0:03:41.357 ******** 2025-03-27 01:01:55.916353 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-03-27 01:01:55.916358 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-03-27 01:01:55.916364 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-03-27 01:01:55.916370 | orchestrator | 2025-03-27 01:01:55.916376 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-03-27 01:01:55.916381 | orchestrator | Thursday 27 March 2025 00:51:34 +0000 (0:00:01.119) 0:03:42.477 ******** 2025-03-27 01:01:55.916387 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-03-27 01:01:55.916393 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-03-27 01:01:55.916399 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-03-27 01:01:55.916405 | orchestrator | 2025-03-27 01:01:55.916410 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 
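[editor's note] The six-second "create ceph initial directories" loop above is a plain file-module loop over the paths visible in its items, and the radosgw tasks that follow it only add one instance directory and one environment file per rgw0 instance. A minimal equivalent of the directory loop is sketched below: the path list is taken verbatim from the log, while the ownership and mode values are assumptions (the real role derives them from variables).

# Sketch of the directory bootstrap seen above; uid/gid 167 (the usual ceph
# user inside the containers) and mode 0755 are assumptions, not from the log.
- name: create ceph initial directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: "167"
    group: "167"
    mode: "0755"
  loop:
    - /etc/ceph
    - /var/lib/ceph/
    - /var/lib/ceph/mon
    - /var/lib/ceph/osd
    - /var/lib/ceph/mds
    - /var/lib/ceph/tmp
    - /var/lib/ceph/radosgw
    - /var/lib/ceph/bootstrap-rgw
    - /var/lib/ceph/bootstrap-mgr
    - /var/lib/ceph/bootstrap-mds
    - /var/lib/ceph/bootstrap-osd
    - /var/lib/ceph/bootstrap-rbd
    - /var/lib/ceph/bootstrap-rbd-mirror
    - /var/run/ceph
    - /var/log/ceph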
2025-03-27 01:01:55.916416 | orchestrator | Thursday 27 March 2025 00:51:35 +0000 (0:00:01.640) 0:03:44.117 ******** 2025-03-27 01:01:55.916422 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.916428 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.916434 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.916451 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.916458 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.916464 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.916470 | orchestrator | 2025-03-27 01:01:55.916475 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-27 01:01:55.916481 | orchestrator | Thursday 27 March 2025 00:51:36 +0000 (0:00:00.855) 0:03:44.972 ******** 2025-03-27 01:01:55.916487 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.916493 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.916502 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.916508 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.916514 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.916520 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.916526 | orchestrator | 2025-03-27 01:01:55.916531 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-27 01:01:55.916537 | orchestrator | Thursday 27 March 2025 00:51:37 +0000 (0:00:00.981) 0:03:45.953 ******** 2025-03-27 01:01:55.916543 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.916580 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.916588 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.916594 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.916600 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.916606 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.916612 | orchestrator | 2025-03-27 01:01:55.916617 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-27 01:01:55.916623 | orchestrator | Thursday 27 March 2025 00:51:38 +0000 (0:00:00.807) 0:03:46.761 ******** 2025-03-27 01:01:55.916629 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.916635 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.916641 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.916646 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.916652 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.916658 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.916664 | orchestrator | 2025-03-27 01:01:55.916670 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-27 01:01:55.916676 | orchestrator | Thursday 27 March 2025 00:51:39 +0000 (0:00:00.955) 0:03:47.717 ******** 2025-03-27 01:01:55.916681 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.916687 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.916693 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.916699 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.916705 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.916711 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.916716 | orchestrator | 2025-03-27 01:01:55.916722 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-27 
01:01:55.916728 | orchestrator | Thursday 27 March 2025 00:51:40 +0000 (0:00:00.828) 0:03:48.545 ******** 2025-03-27 01:01:55.916734 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.916740 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.916746 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.916751 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.916757 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.916767 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.916773 | orchestrator | 2025-03-27 01:01:55.916779 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-27 01:01:55.916785 | orchestrator | Thursday 27 March 2025 00:51:41 +0000 (0:00:00.990) 0:03:49.536 ******** 2025-03-27 01:01:55.916791 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.916797 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.916803 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.916809 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.916814 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.916820 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.916826 | orchestrator | 2025-03-27 01:01:55.916832 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-27 01:01:55.916838 | orchestrator | Thursday 27 March 2025 00:51:41 +0000 (0:00:00.766) 0:03:50.303 ******** 2025-03-27 01:01:55.916843 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.916849 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.916855 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.916865 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.916871 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.916877 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.916883 | orchestrator | 2025-03-27 01:01:55.916888 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-27 01:01:55.916894 | orchestrator | Thursday 27 March 2025 00:51:42 +0000 (0:00:01.020) 0:03:51.323 ******** 2025-03-27 01:01:55.916900 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.916906 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.916912 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.916918 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.916923 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.916929 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.916935 | orchestrator | 2025-03-27 01:01:55.916941 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-27 01:01:55.916946 | orchestrator | Thursday 27 March 2025 00:51:45 +0000 (0:00:02.348) 0:03:53.672 ******** 2025-03-27 01:01:55.916952 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.916958 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.916964 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.916969 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.916975 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.916981 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.916987 | orchestrator | 2025-03-27 01:01:55.916992 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-03-27 01:01:55.916998 | orchestrator | Thursday 27 March 2025 00:51:46 +0000 (0:00:00.964) 0:03:54.636 ******** 2025-03-27 01:01:55.917004 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-27 01:01:55.917010 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-27 01:01:55.917016 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917022 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-27 01:01:55.917030 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-27 01:01:55.917036 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917042 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-27 01:01:55.917048 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-27 01:01:55.917054 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917060 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.917065 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.917071 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.917077 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.917083 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.917089 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.917095 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.917100 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.917106 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.917115 | orchestrator | 2025-03-27 01:01:55.917121 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-27 01:01:55.917156 | orchestrator | Thursday 27 March 2025 00:51:46 +0000 (0:00:00.733) 0:03:55.369 ******** 2025-03-27 01:01:55.917165 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-03-27 01:01:55.917175 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-03-27 01:01:55.917181 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917187 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-03-27 01:01:55.917193 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-03-27 01:01:55.917199 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917205 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-03-27 01:01:55.917211 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-03-27 01:01:55.917216 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917226 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-03-27 01:01:55.917232 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-03-27 01:01:55.917238 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-03-27 01:01:55.917244 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-03-27 01:01:55.917249 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-03-27 01:01:55.917255 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-03-27 01:01:55.917261 | orchestrator | 2025-03-27 01:01:55.917266 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-27 01:01:55.917272 | orchestrator | Thursday 27 March 2025 00:51:47 +0000 (0:00:01.061) 0:03:56.431 ******** 2025-03-27 
01:01:55.917278 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917284 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917290 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917295 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.917301 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.917307 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.917313 | orchestrator | 2025-03-27 01:01:55.917318 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-27 01:01:55.917324 | orchestrator | Thursday 27 March 2025 00:51:48 +0000 (0:00:00.798) 0:03:57.230 ******** 2025-03-27 01:01:55.917330 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917336 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917341 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917347 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.917353 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.917359 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.917364 | orchestrator | 2025-03-27 01:01:55.917370 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-27 01:01:55.917376 | orchestrator | Thursday 27 March 2025 00:51:49 +0000 (0:00:00.930) 0:03:58.161 ******** 2025-03-27 01:01:55.917382 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917388 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917393 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917399 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.917407 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.917413 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.917419 | orchestrator | 2025-03-27 01:01:55.917425 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-27 01:01:55.917431 | orchestrator | Thursday 27 March 2025 00:51:50 +0000 (0:00:00.803) 0:03:58.965 ******** 2025-03-27 01:01:55.917436 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917456 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917462 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917467 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.917473 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.917479 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.917485 | orchestrator | 2025-03-27 01:01:55.917493 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-27 01:01:55.917499 | orchestrator | Thursday 27 March 2025 00:51:51 +0000 (0:00:01.007) 0:03:59.973 ******** 2025-03-27 01:01:55.917505 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917511 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917517 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917522 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.917528 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.917534 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.917540 | orchestrator | 2025-03-27 01:01:55.917545 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-27 01:01:55.917551 | orchestrator | Thursday 27 March 2025 00:51:52 +0000 
(0:00:00.655) 0:04:00.628 ******** 2025-03-27 01:01:55.917560 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917566 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917572 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917578 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.917583 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.917589 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.917595 | orchestrator | 2025-03-27 01:01:55.917601 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-27 01:01:55.917606 | orchestrator | Thursday 27 March 2025 00:51:53 +0000 (0:00:00.890) 0:04:01.519 ******** 2025-03-27 01:01:55.917612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.917618 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.917624 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.917630 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917635 | orchestrator | 2025-03-27 01:01:55.917641 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-27 01:01:55.917647 | orchestrator | Thursday 27 March 2025 00:51:53 +0000 (0:00:00.441) 0:04:01.960 ******** 2025-03-27 01:01:55.917653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.917659 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.917664 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.917670 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917687 | orchestrator | 2025-03-27 01:01:55.917726 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-27 01:01:55.917734 | orchestrator | Thursday 27 March 2025 00:51:53 +0000 (0:00:00.383) 0:04:02.343 ******** 2025-03-27 01:01:55.917740 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.917746 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.917752 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.917757 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917763 | orchestrator | 2025-03-27 01:01:55.917769 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.917775 | orchestrator | Thursday 27 March 2025 00:51:54 +0000 (0:00:00.430) 0:04:02.774 ******** 2025-03-27 01:01:55.917781 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917786 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917792 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917798 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.917804 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.917809 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.917815 | orchestrator | 2025-03-27 01:01:55.917821 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-27 01:01:55.917827 | orchestrator | Thursday 27 March 2025 00:51:55 +0000 (0:00:01.186) 0:04:03.960 ******** 2025-03-27 01:01:55.917833 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-27 01:01:55.917839 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917844 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-27 01:01:55.917850 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917856 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-27 01:01:55.917862 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917868 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-03-27 01:01:55.917873 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-03-27 01:01:55.917879 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-03-27 01:01:55.917885 | orchestrator | 2025-03-27 01:01:55.917891 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-27 01:01:55.917896 | orchestrator | Thursday 27 March 2025 00:51:57 +0000 (0:00:01.605) 0:04:05.566 ******** 2025-03-27 01:01:55.917902 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917908 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917918 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917924 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.917929 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.917935 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.917941 | orchestrator | 2025-03-27 01:01:55.917947 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.917952 | orchestrator | Thursday 27 March 2025 00:51:58 +0000 (0:00:00.969) 0:04:06.536 ******** 2025-03-27 01:01:55.917958 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.917964 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.917970 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.917975 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.917981 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.917987 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.917993 | orchestrator | 2025-03-27 01:01:55.917998 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-27 01:01:55.918004 | orchestrator | Thursday 27 March 2025 00:51:58 +0000 (0:00:00.790) 0:04:07.327 ******** 2025-03-27 01:01:55.918010 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-27 01:01:55.918031 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.918038 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-27 01:01:55.918043 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.918049 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-27 01:01:55.918055 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.918061 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.918067 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.918076 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.918082 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.918087 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.918093 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.918099 | orchestrator | 2025-03-27 01:01:55.918105 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-27 01:01:55.918111 | orchestrator | Thursday 27 March 2025 00:52:00 +0000 (0:00:01.668) 0:04:08.996 ******** 2025-03-27 01:01:55.918116 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.918122 | orchestrator | 
skipping: [testbed-node-1] 2025-03-27 01:01:55.918128 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.918134 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.918140 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.918146 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.918151 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.918157 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.918163 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.918169 | orchestrator | 2025-03-27 01:01:55.918175 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-27 01:01:55.918180 | orchestrator | Thursday 27 March 2025 00:52:01 +0000 (0:00:00.926) 0:04:09.922 ******** 2025-03-27 01:01:55.918186 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.918192 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.918198 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.918204 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.918209 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-27 01:01:55.918246 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-27 01:01:55.918254 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-27 01:01:55.918264 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.918270 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-27 01:01:55.918276 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-27 01:01:55.918281 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-27 01:01:55.918287 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.918293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.918299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.918304 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-27 01:01:55.918310 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-27 01:01:55.918316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.918322 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.918328 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-27 01:01:55.918333 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-27 01:01:55.918339 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-27 01:01:55.918345 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.918351 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-27 01:01:55.918356 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.918362 | orchestrator | 2025-03-27 01:01:55.918368 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-27 01:01:55.918374 | orchestrator | Thursday 27 March 2025 00:52:03 +0000 
(0:00:02.189) 0:04:12.111 ******** 2025-03-27 01:01:55.918379 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.918385 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.918391 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.918397 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.918402 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.918408 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.918414 | orchestrator | 2025-03-27 01:01:55.918420 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-27 01:01:55.918425 | orchestrator | Thursday 27 March 2025 00:52:10 +0000 (0:00:06.477) 0:04:18.589 ******** 2025-03-27 01:01:55.918431 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.918437 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.918476 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.918483 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.918488 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.918494 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.918500 | orchestrator | 2025-03-27 01:01:55.918506 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-03-27 01:01:55.918512 | orchestrator | Thursday 27 March 2025 00:52:11 +0000 (0:00:01.473) 0:04:20.063 ******** 2025-03-27 01:01:55.918517 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.918523 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.918529 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.918535 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.918541 | orchestrator | 2025-03-27 01:01:55.918546 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-03-27 01:01:55.918552 | orchestrator | Thursday 27 March 2025 00:52:12 +0000 (0:00:00.962) 0:04:21.025 ******** 2025-03-27 01:01:55.918558 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.918564 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.918570 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.918575 | orchestrator | 2025-03-27 01:01:55.918585 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-03-27 01:01:55.918591 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.918597 | orchestrator | 2025-03-27 01:01:55.918607 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-03-27 01:01:55.918613 | orchestrator | Thursday 27 March 2025 00:52:13 +0000 (0:00:01.191) 0:04:22.217 ******** 2025-03-27 01:01:55.918618 | orchestrator | 2025-03-27 01:01:55.918624 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-03-27 01:01:55.918630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.918636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.918641 | orchestrator | 2025-03-27 01:01:55.918647 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-03-27 01:01:55.918662 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.918668 | orchestrator | 
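[editor's note] The "generate ceph.conf configuration file" change above is what notifies the handler chain that follows: a guard fact is set, a restart helper script is copied into the tempdir created by "make tempdir for scripts", and the actual daemon restart only runs when the role decides a restart is required (it is skipped in this run). A stripped-down sketch of that pattern follows; the script name, variable names, and the skip condition are assumptions, not read from the log.

# Sketch of the notify/restart pattern; script and variable names are assumptions.
- name: set _mon_handler_called before restart
  ansible.builtin.set_fact:
    _mon_handler_called: true

- name: copy mon restart script
  ansible.builtin.template:
    src: restart_mon_daemon.sh.j2
    dest: "{{ tmpdirpath.path }}/restart_mon_daemon.sh"
    mode: "0750"

# The restart is delegated to each monitor and only runs when the guard
# condition says a restart is actually needed -- skipped in this run.
- name: restart ceph mon daemon(s)
  ansible.builtin.command: "{{ hostvars[item]['tmpdirpath']['path'] }}/restart_mon_daemon.sh"
  with_items: "{{ groups[mon_group_name] }}"
  delegate_to: "{{ item }}"
  run_once: true
  when: hostvars[item]['handler_mon_status'] | default(false) | bool

- name: set _mon_handler_called after restart
  ansible.builtin.set_fact:
    _mon_handler_called: false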
2025-03-27 01:01:55.918674 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-03-27 01:01:55.918680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.918686 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.918692 | orchestrator | 2025-03-27 01:01:55.918697 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-03-27 01:01:55.918703 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.918709 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.918715 | orchestrator | 2025-03-27 01:01:55.918721 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-03-27 01:01:55.918727 | orchestrator | Thursday 27 March 2025 00:52:15 +0000 (0:00:01.283) 0:04:23.501 ******** 2025-03-27 01:01:55.918732 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:01:55.918741 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:01:55.918747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:01:55.918752 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.918758 | orchestrator | 2025-03-27 01:01:55.918764 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-03-27 01:01:55.918770 | orchestrator | Thursday 27 March 2025 00:52:15 +0000 (0:00:00.787) 0:04:24.288 ******** 2025-03-27 01:01:55.918776 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.918815 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.918823 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.918829 | orchestrator | 2025-03-27 01:01:55.918835 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-03-27 01:01:55.918841 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.918846 | orchestrator | 2025-03-27 01:01:55.918852 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-03-27 01:01:55.918858 | orchestrator | Thursday 27 March 2025 00:52:16 +0000 (0:00:00.708) 0:04:24.997 ******** 2025-03-27 01:01:55.918863 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.918869 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.918874 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.918879 | orchestrator | 2025-03-27 01:01:55.918885 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-03-27 01:01:55.918890 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.918895 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.918900 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.918905 | orchestrator | 2025-03-27 01:01:55.918911 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-03-27 01:01:55.918916 | orchestrator | Thursday 27 March 2025 00:52:17 +0000 (0:00:00.924) 0:04:25.921 ******** 2025-03-27 01:01:55.918921 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.918926 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.918931 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.918936 | orchestrator | 2025-03-27 01:01:55.918942 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-03-27 01:01:55.918947 | 
orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.918955 | orchestrator | 2025-03-27 01:01:55.918960 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-03-27 01:01:55.918969 | orchestrator | Thursday 27 March 2025 00:52:18 +0000 (0:00:00.716) 0:04:26.638 ******** 2025-03-27 01:01:55.918974 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.918979 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.918984 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.918990 | orchestrator | 2025-03-27 01:01:55.918995 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-03-27 01:01:55.919000 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.919005 | orchestrator | 2025-03-27 01:01:55.919010 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-03-27 01:01:55.919015 | orchestrator | Thursday 27 March 2025 00:52:19 +0000 (0:00:01.123) 0:04:27.762 ******** 2025-03-27 01:01:55.919021 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.919026 | orchestrator | 2025-03-27 01:01:55.919031 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-03-27 01:01:55.919036 | orchestrator | Thursday 27 March 2025 00:52:19 +0000 (0:00:00.161) 0:04:27.924 ******** 2025-03-27 01:01:55.919042 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.919047 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.919052 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.919057 | orchestrator | 2025-03-27 01:01:55.919062 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-03-27 01:01:55.919068 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.919073 | orchestrator | 2025-03-27 01:01:55.919078 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-03-27 01:01:55.919083 | orchestrator | Thursday 27 March 2025 00:52:20 +0000 (0:00:00.643) 0:04:28.568 ******** 2025-03-27 01:01:55.919088 | orchestrator | 2025-03-27 01:01:55.919094 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-03-27 01:01:55.919099 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.919104 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.919109 | orchestrator | 2025-03-27 01:01:55.919115 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-03-27 01:01:55.919120 | orchestrator | Thursday 27 March 2025 00:52:21 +0000 (0:00:01.110) 0:04:29.678 ******** 2025-03-27 01:01:55.919125 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.919130 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.919136 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.919141 | orchestrator | 2025-03-27 01:01:55.919148 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-03-27 01:01:55.919154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.919159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.919164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.919169 | orchestrator | skipping: 
[testbed-node-3] 2025-03-27 01:01:55.919174 | orchestrator | 2025-03-27 01:01:55.919180 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-03-27 01:01:55.919185 | orchestrator | Thursday 27 March 2025 00:52:22 +0000 (0:00:00.802) 0:04:30.480 ******** 2025-03-27 01:01:55.919190 | orchestrator | 2025-03-27 01:01:55.919195 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-03-27 01:01:55.919201 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.919206 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.919211 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.919216 | orchestrator | 2025-03-27 01:01:55.919221 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-03-27 01:01:55.919227 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.919232 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.919237 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.919242 | orchestrator | 2025-03-27 01:01:55.919248 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-03-27 01:01:55.919256 | orchestrator | Thursday 27 March 2025 00:52:23 +0000 (0:00:01.570) 0:04:32.051 ******** 2025-03-27 01:01:55.919261 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:01:55.919267 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:01:55.919272 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:01:55.919277 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.919282 | orchestrator | 2025-03-27 01:01:55.919287 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-03-27 01:01:55.919320 | orchestrator | Thursday 27 March 2025 00:52:24 +0000 (0:00:01.155) 0:04:33.206 ******** 2025-03-27 01:01:55.919327 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.919333 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.919338 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.919343 | orchestrator | 2025-03-27 01:01:55.919348 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-03-27 01:01:55.919354 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.919359 | orchestrator | 2025-03-27 01:01:55.919364 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-03-27 01:01:55.919369 | orchestrator | Thursday 27 March 2025 00:52:25 +0000 (0:00:00.933) 0:04:34.140 ******** 2025-03-27 01:01:55.919375 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.919380 | orchestrator | 2025-03-27 01:01:55.919386 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-03-27 01:01:55.919391 | orchestrator | Thursday 27 March 2025 00:52:26 +0000 (0:00:00.860) 0:04:35.000 ******** 2025-03-27 01:01:55.919396 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.919401 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.919406 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.919411 | orchestrator | 2025-03-27 01:01:55.919417 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-03-27 
01:01:55.919422 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.919427 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.919432 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.919438 | orchestrator | 2025-03-27 01:01:55.919454 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-03-27 01:01:55.919460 | orchestrator | Thursday 27 March 2025 00:52:27 +0000 (0:00:01.111) 0:04:36.111 ******** 2025-03-27 01:01:55.919465 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.919470 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.919475 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.919480 | orchestrator | 2025-03-27 01:01:55.919485 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-27 01:01:55.919491 | orchestrator | Thursday 27 March 2025 00:52:29 +0000 (0:00:01.695) 0:04:37.807 ******** 2025-03-27 01:01:55.919496 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.919501 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.919506 | orchestrator | 2025-03-27 01:01:55.919511 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-03-27 01:01:55.919516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.919522 | orchestrator | 2025-03-27 01:01:55.919527 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-27 01:01:55.919532 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.919537 | orchestrator | 2025-03-27 01:01:55.919542 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-03-27 01:01:55.919548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.919553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.919558 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.919563 | orchestrator | 2025-03-27 01:01:55.919568 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-03-27 01:01:55.919579 | orchestrator | Thursday 27 March 2025 00:52:30 +0000 (0:00:01.218) 0:04:39.025 ******** 2025-03-27 01:01:55.919584 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.919590 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.919595 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.919600 | orchestrator | 2025-03-27 01:01:55.919605 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-03-27 01:01:55.919611 | orchestrator | Thursday 27 March 2025 00:52:31 +0000 (0:00:01.114) 0:04:40.140 ******** 2025-03-27 01:01:55.919616 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.919621 | orchestrator | 2025-03-27 01:01:55.919626 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-03-27 01:01:55.919632 | orchestrator | Thursday 27 March 2025 00:52:32 +0000 (0:00:00.836) 0:04:40.977 ******** 2025-03-27 01:01:55.919637 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.919642 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.919647 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.919653 | orchestrator | 2025-03-27 01:01:55.919658 | orchestrator | RUNNING HANDLER 
[ceph-handler : copy rgw restart script] *********************** 2025-03-27 01:01:55.919663 | orchestrator | Thursday 27 March 2025 00:52:32 +0000 (0:00:00.354) 0:04:41.331 ******** 2025-03-27 01:01:55.919668 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.919674 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.919679 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.919684 | orchestrator | 2025-03-27 01:01:55.919689 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-03-27 01:01:55.919694 | orchestrator | Thursday 27 March 2025 00:52:34 +0000 (0:00:01.344) 0:04:42.676 ******** 2025-03-27 01:01:55.919699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.919708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.919713 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.919718 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.919724 | orchestrator | 2025-03-27 01:01:55.919729 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-03-27 01:01:55.919734 | orchestrator | Thursday 27 March 2025 00:52:35 +0000 (0:00:01.087) 0:04:43.764 ******** 2025-03-27 01:01:55.919739 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.919745 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.919750 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.919755 | orchestrator | 2025-03-27 01:01:55.919760 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-03-27 01:01:55.919772 | orchestrator | Thursday 27 March 2025 00:52:35 +0000 (0:00:00.576) 0:04:44.340 ******** 2025-03-27 01:01:55.919778 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.919787 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.919792 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.919798 | orchestrator | 2025-03-27 01:01:55.919803 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-03-27 01:01:55.919839 | orchestrator | Thursday 27 March 2025 00:52:36 +0000 (0:00:00.361) 0:04:44.701 ******** 2025-03-27 01:01:55.919847 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.919852 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.919858 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.919863 | orchestrator | 2025-03-27 01:01:55.919869 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-03-27 01:01:55.919874 | orchestrator | Thursday 27 March 2025 00:52:36 +0000 (0:00:00.393) 0:04:45.095 ******** 2025-03-27 01:01:55.919879 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.919885 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.919890 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.919895 | orchestrator | 2025-03-27 01:01:55.919901 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-27 01:01:55.919910 | orchestrator | Thursday 27 March 2025 00:52:37 +0000 (0:00:00.367) 0:04:45.462 ******** 2025-03-27 01:01:55.919915 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.919921 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.919926 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.919932 | 
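The handler sequence above follows a copy-script-then-clean-up pattern: a temporary directory is created on each node, a per-daemon restart script (mgr, mds, rgw) is copied into it, the restart itself runs only when that daemon actually changed (here the restarts were skipped), and the "remove tempdir for scripts" handler cleans up afterwards. A minimal shell sketch of that pattern follows; the script name and the systemd unit are illustrative assumptions, not values taken from this deployment.

    # Hypothetical sketch of the tempdir/restart-script pattern used by the handlers above.
    # "restart_mgr_daemon.sh" and the unit name are placeholders for illustration only.
    tmpdir=$(mktemp -d /tmp/ceph_handler.XXXXXX)      # "make tempdir for scripts"
    cp restart_mgr_daemon.sh "$tmpdir/"               # "copy mgr restart script"
    chmod +x "$tmpdir/restart_mgr_daemon.sh"
    "$tmpdir/restart_mgr_daemon.sh"                   # e.g. systemctl restart ceph-mgr@$(hostname -s)
    rm -rf "$tmpdir"                                  # "remove tempdir for scripts"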
orchestrator | 2025-03-27 01:01:55.919937 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-03-27 01:01:55.919942 | orchestrator | 2025-03-27 01:01:55.919948 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-27 01:01:55.919953 | orchestrator | Thursday 27 March 2025 00:52:39 +0000 (0:00:02.424) 0:04:47.887 ******** 2025-03-27 01:01:55.919958 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.919964 | orchestrator | 2025-03-27 01:01:55.919969 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-03-27 01:01:55.919975 | orchestrator | Thursday 27 March 2025 00:52:40 +0000 (0:00:00.634) 0:04:48.522 ******** 2025-03-27 01:01:55.919980 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.919985 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.919991 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.919996 | orchestrator | 2025-03-27 01:01:55.920001 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-27 01:01:55.920007 | orchestrator | Thursday 27 March 2025 00:52:41 +0000 (0:00:01.114) 0:04:49.636 ******** 2025-03-27 01:01:55.920012 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920017 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920023 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920028 | orchestrator | 2025-03-27 01:01:55.920033 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-27 01:01:55.920039 | orchestrator | Thursday 27 March 2025 00:52:41 +0000 (0:00:00.390) 0:04:50.027 ******** 2025-03-27 01:01:55.920044 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920049 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920055 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920060 | orchestrator | 2025-03-27 01:01:55.920065 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-03-27 01:01:55.920071 | orchestrator | Thursday 27 March 2025 00:52:41 +0000 (0:00:00.380) 0:04:50.407 ******** 2025-03-27 01:01:55.920076 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920081 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920087 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920092 | orchestrator | 2025-03-27 01:01:55.920097 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-27 01:01:55.920103 | orchestrator | Thursday 27 March 2025 00:52:42 +0000 (0:00:00.345) 0:04:50.752 ******** 2025-03-27 01:01:55.920108 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.920113 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.920119 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.920124 | orchestrator | 2025-03-27 01:01:55.920129 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-27 01:01:55.920135 | orchestrator | Thursday 27 March 2025 00:52:43 +0000 (0:00:01.058) 0:04:51.811 ******** 2025-03-27 01:01:55.920140 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920145 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920151 | orchestrator | skipping: [testbed-node-2] 
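These "check for a … container" tasks appear to probe each node's container runtime for an already-running daemon container of the given type; the results feed the handler_*_status facts set later in the play. A rough command-line equivalent is sketched below, assuming podman as the runtime and a "ceph-<daemon>-<short hostname>" container naming convention; both are assumptions, not confirmed by this log.

    # Hypothetical equivalent of a "check for a mon container" task.
    # Assumes podman and a "ceph-mon-<short hostname>" container name.
    if podman ps -q --filter "name=ceph-mon-$(hostname -s)" | grep -q .; then
        echo "ceph-mon container is running on this node"
    else
        echo "no ceph-mon container found"
    fi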
2025-03-27 01:01:55.920156 | orchestrator | 2025-03-27 01:01:55.920162 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-27 01:01:55.920167 | orchestrator | Thursday 27 March 2025 00:52:43 +0000 (0:00:00.386) 0:04:52.198 ******** 2025-03-27 01:01:55.920172 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920178 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920183 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920188 | orchestrator | 2025-03-27 01:01:55.920194 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-27 01:01:55.920202 | orchestrator | Thursday 27 March 2025 00:52:44 +0000 (0:00:00.382) 0:04:52.580 ******** 2025-03-27 01:01:55.920208 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920213 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920218 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920224 | orchestrator | 2025-03-27 01:01:55.920229 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-27 01:01:55.920234 | orchestrator | Thursday 27 March 2025 00:52:44 +0000 (0:00:00.400) 0:04:52.980 ******** 2025-03-27 01:01:55.920240 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920245 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920251 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920256 | orchestrator | 2025-03-27 01:01:55.920261 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-27 01:01:55.920269 | orchestrator | Thursday 27 March 2025 00:52:45 +0000 (0:00:00.701) 0:04:53.682 ******** 2025-03-27 01:01:55.920275 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920280 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920286 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920291 | orchestrator | 2025-03-27 01:01:55.920297 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-27 01:01:55.920302 | orchestrator | Thursday 27 March 2025 00:52:45 +0000 (0:00:00.457) 0:04:54.140 ******** 2025-03-27 01:01:55.920307 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.920313 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.920347 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.920355 | orchestrator | 2025-03-27 01:01:55.920361 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-27 01:01:55.920366 | orchestrator | Thursday 27 March 2025 00:52:46 +0000 (0:00:00.793) 0:04:54.933 ******** 2025-03-27 01:01:55.920371 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920377 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920382 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920388 | orchestrator | 2025-03-27 01:01:55.920393 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-27 01:01:55.920398 | orchestrator | Thursday 27 March 2025 00:52:46 +0000 (0:00:00.421) 0:04:55.355 ******** 2025-03-27 01:01:55.920404 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.920409 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.920414 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.920423 | orchestrator | 2025-03-27 01:01:55.920429 | orchestrator | TASK 
[ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-27 01:01:55.920434 | orchestrator | Thursday 27 March 2025 00:52:47 +0000 (0:00:00.744) 0:04:56.099 ******** 2025-03-27 01:01:55.920453 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920458 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920464 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920469 | orchestrator | 2025-03-27 01:01:55.920474 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-27 01:01:55.920479 | orchestrator | Thursday 27 March 2025 00:52:48 +0000 (0:00:00.404) 0:04:56.503 ******** 2025-03-27 01:01:55.920484 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920490 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920495 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920500 | orchestrator | 2025-03-27 01:01:55.920505 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-27 01:01:55.920510 | orchestrator | Thursday 27 March 2025 00:52:48 +0000 (0:00:00.347) 0:04:56.850 ******** 2025-03-27 01:01:55.920516 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920521 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920526 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920531 | orchestrator | 2025-03-27 01:01:55.920536 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-27 01:01:55.920545 | orchestrator | Thursday 27 March 2025 00:52:48 +0000 (0:00:00.388) 0:04:57.239 ******** 2025-03-27 01:01:55.920550 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920556 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920561 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920566 | orchestrator | 2025-03-27 01:01:55.920572 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-27 01:01:55.920577 | orchestrator | Thursday 27 March 2025 00:52:49 +0000 (0:00:00.673) 0:04:57.913 ******** 2025-03-27 01:01:55.920582 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920587 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920592 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920598 | orchestrator | 2025-03-27 01:01:55.920603 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-27 01:01:55.920608 | orchestrator | Thursday 27 March 2025 00:52:49 +0000 (0:00:00.343) 0:04:58.257 ******** 2025-03-27 01:01:55.920613 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.920618 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.920624 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.920629 | orchestrator | 2025-03-27 01:01:55.920634 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-27 01:01:55.920642 | orchestrator | Thursday 27 March 2025 00:52:50 +0000 (0:00:00.391) 0:04:58.648 ******** 2025-03-27 01:01:55.920648 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.920653 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.920658 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.920663 | orchestrator | 2025-03-27 01:01:55.920669 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-27 
01:01:55.920674 | orchestrator | Thursday 27 March 2025 00:52:50 +0000 (0:00:00.677) 0:04:59.326 ******** 2025-03-27 01:01:55.920679 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920684 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920690 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920695 | orchestrator | 2025-03-27 01:01:55.920700 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-27 01:01:55.920705 | orchestrator | Thursday 27 March 2025 00:52:51 +0000 (0:00:00.372) 0:04:59.699 ******** 2025-03-27 01:01:55.920710 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920716 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920721 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920726 | orchestrator | 2025-03-27 01:01:55.920731 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-27 01:01:55.920736 | orchestrator | Thursday 27 March 2025 00:52:51 +0000 (0:00:00.458) 0:05:00.158 ******** 2025-03-27 01:01:55.920742 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920747 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920752 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920757 | orchestrator | 2025-03-27 01:01:55.920762 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-27 01:01:55.920768 | orchestrator | Thursday 27 March 2025 00:52:52 +0000 (0:00:00.398) 0:05:00.556 ******** 2025-03-27 01:01:55.920773 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920778 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920783 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920788 | orchestrator | 2025-03-27 01:01:55.920794 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-27 01:01:55.920799 | orchestrator | Thursday 27 March 2025 00:52:52 +0000 (0:00:00.643) 0:05:01.199 ******** 2025-03-27 01:01:55.920821 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920827 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920832 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920837 | orchestrator | 2025-03-27 01:01:55.920843 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-27 01:01:55.920851 | orchestrator | Thursday 27 March 2025 00:52:53 +0000 (0:00:00.430) 0:05:01.630 ******** 2025-03-27 01:01:55.920860 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920866 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920901 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920909 | orchestrator | 2025-03-27 01:01:55.920914 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-27 01:01:55.920919 | orchestrator | Thursday 27 March 2025 00:52:53 +0000 (0:00:00.406) 0:05:02.036 ******** 2025-03-27 01:01:55.920925 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920930 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920935 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920940 | orchestrator | 2025-03-27 01:01:55.920945 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-27 01:01:55.920951 | orchestrator | Thursday 
27 March 2025 00:52:54 +0000 (0:00:00.416) 0:05:02.452 ******** 2025-03-27 01:01:55.920959 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920965 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.920970 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.920975 | orchestrator | 2025-03-27 01:01:55.920980 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-27 01:01:55.920985 | orchestrator | Thursday 27 March 2025 00:52:54 +0000 (0:00:00.677) 0:05:03.130 ******** 2025-03-27 01:01:55.920991 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.920996 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921001 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921006 | orchestrator | 2025-03-27 01:01:55.921011 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-27 01:01:55.921017 | orchestrator | Thursday 27 March 2025 00:52:55 +0000 (0:00:00.468) 0:05:03.599 ******** 2025-03-27 01:01:55.921022 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921027 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921032 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921040 | orchestrator | 2025-03-27 01:01:55.921046 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-27 01:01:55.921051 | orchestrator | Thursday 27 March 2025 00:52:55 +0000 (0:00:00.407) 0:05:04.006 ******** 2025-03-27 01:01:55.921056 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921061 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921066 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921072 | orchestrator | 2025-03-27 01:01:55.921077 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-27 01:01:55.921082 | orchestrator | Thursday 27 March 2025 00:52:55 +0000 (0:00:00.403) 0:05:04.410 ******** 2025-03-27 01:01:55.921087 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921093 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921098 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921103 | orchestrator | 2025-03-27 01:01:55.921108 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-27 01:01:55.921113 | orchestrator | Thursday 27 March 2025 00:52:56 +0000 (0:00:00.522) 0:05:04.932 ******** 2025-03-27 01:01:55.921119 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-27 01:01:55.921124 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-27 01:01:55.921129 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921134 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-27 01:01:55.921139 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-27 01:01:55.921145 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921150 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-27 01:01:55.921155 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-27 01:01:55.921160 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921165 | orchestrator | 2025-03-27 01:01:55.921171 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-27 01:01:55.921180 | 
orchestrator | Thursday 27 March 2025 00:52:56 +0000 (0:00:00.412) 0:05:05.345 ******** 2025-03-27 01:01:55.921185 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-03-27 01:01:55.921191 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-03-27 01:01:55.921196 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921201 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-03-27 01:01:55.921206 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-03-27 01:01:55.921211 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921217 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-03-27 01:01:55.921222 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-03-27 01:01:55.921227 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921232 | orchestrator | 2025-03-27 01:01:55.921238 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-27 01:01:55.921243 | orchestrator | Thursday 27 March 2025 00:52:57 +0000 (0:00:00.347) 0:05:05.692 ******** 2025-03-27 01:01:55.921248 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921253 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921258 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921263 | orchestrator | 2025-03-27 01:01:55.921269 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-27 01:01:55.921274 | orchestrator | Thursday 27 March 2025 00:52:57 +0000 (0:00:00.309) 0:05:06.002 ******** 2025-03-27 01:01:55.921279 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921284 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921289 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921294 | orchestrator | 2025-03-27 01:01:55.921300 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-27 01:01:55.921305 | orchestrator | Thursday 27 March 2025 00:52:58 +0000 (0:00:00.621) 0:05:06.623 ******** 2025-03-27 01:01:55.921310 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921315 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921321 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921326 | orchestrator | 2025-03-27 01:01:55.921331 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-27 01:01:55.921336 | orchestrator | Thursday 27 March 2025 00:52:58 +0000 (0:00:00.360) 0:05:06.984 ******** 2025-03-27 01:01:55.921368 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921376 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921381 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921386 | orchestrator | 2025-03-27 01:01:55.921392 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-27 01:01:55.921397 | orchestrator | Thursday 27 March 2025 00:52:58 +0000 (0:00:00.435) 0:05:07.419 ******** 2025-03-27 01:01:55.921402 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921407 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921413 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921418 | orchestrator | 2025-03-27 01:01:55.921423 | orchestrator | TASK 
[ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-27 01:01:55.921429 | orchestrator | Thursday 27 March 2025 00:52:59 +0000 (0:00:00.371) 0:05:07.790 ******** 2025-03-27 01:01:55.921434 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921469 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921475 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921481 | orchestrator | 2025-03-27 01:01:55.921486 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-27 01:01:55.921491 | orchestrator | Thursday 27 March 2025 00:53:00 +0000 (0:00:00.680) 0:05:08.470 ******** 2025-03-27 01:01:55.921497 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.921502 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.921511 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.921517 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921522 | orchestrator | 2025-03-27 01:01:55.921527 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-27 01:01:55.921532 | orchestrator | Thursday 27 March 2025 00:53:00 +0000 (0:00:00.519) 0:05:08.989 ******** 2025-03-27 01:01:55.921538 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.921543 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.921548 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.921553 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921559 | orchestrator | 2025-03-27 01:01:55.921564 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-27 01:01:55.921569 | orchestrator | Thursday 27 March 2025 00:53:01 +0000 (0:00:00.474) 0:05:09.464 ******** 2025-03-27 01:01:55.921575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.921580 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.921585 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.921590 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921596 | orchestrator | 2025-03-27 01:01:55.921601 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.921611 | orchestrator | Thursday 27 March 2025 00:53:01 +0000 (0:00:00.573) 0:05:10.037 ******** 2025-03-27 01:01:55.921616 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921621 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921626 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921632 | orchestrator | 2025-03-27 01:01:55.921637 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-27 01:01:55.921642 | orchestrator | Thursday 27 March 2025 00:53:01 +0000 (0:00:00.401) 0:05:10.439 ******** 2025-03-27 01:01:55.921647 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-27 01:01:55.921653 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921658 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-27 01:01:55.921663 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921668 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-27 
01:01:55.921674 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921679 | orchestrator | 2025-03-27 01:01:55.921684 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-27 01:01:55.921689 | orchestrator | Thursday 27 March 2025 00:53:02 +0000 (0:00:00.537) 0:05:10.976 ******** 2025-03-27 01:01:55.921695 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921708 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921713 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921719 | orchestrator | 2025-03-27 01:01:55.921724 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.921729 | orchestrator | Thursday 27 March 2025 00:53:03 +0000 (0:00:00.712) 0:05:11.689 ******** 2025-03-27 01:01:55.921735 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921740 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921745 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921750 | orchestrator | 2025-03-27 01:01:55.921755 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-27 01:01:55.921761 | orchestrator | Thursday 27 March 2025 00:53:03 +0000 (0:00:00.462) 0:05:12.151 ******** 2025-03-27 01:01:55.921766 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-27 01:01:55.921771 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921776 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-27 01:01:55.921781 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921787 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-27 01:01:55.921792 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921801 | orchestrator | 2025-03-27 01:01:55.921806 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-27 01:01:55.921811 | orchestrator | Thursday 27 March 2025 00:53:04 +0000 (0:00:00.523) 0:05:12.675 ******** 2025-03-27 01:01:55.921816 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921822 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921827 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921835 | orchestrator | 2025-03-27 01:01:55.921840 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-27 01:01:55.921845 | orchestrator | Thursday 27 March 2025 00:53:04 +0000 (0:00:00.371) 0:05:13.046 ******** 2025-03-27 01:01:55.921851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.921871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.921878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.921883 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921888 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-27 01:01:55.921894 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-27 01:01:55.921899 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-27 01:01:55.921904 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921910 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-27 01:01:55.921917 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-27 01:01:55.921923 | orchestrator 
| skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-27 01:01:55.921928 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921933 | orchestrator | 2025-03-27 01:01:55.921939 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-27 01:01:55.921944 | orchestrator | Thursday 27 March 2025 00:53:05 +0000 (0:00:01.224) 0:05:14.271 ******** 2025-03-27 01:01:55.921949 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921954 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921960 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921965 | orchestrator | 2025-03-27 01:01:55.921970 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-27 01:01:55.921975 | orchestrator | Thursday 27 March 2025 00:53:06 +0000 (0:00:00.859) 0:05:15.131 ******** 2025-03-27 01:01:55.921981 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.921986 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.921991 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.921996 | orchestrator | 2025-03-27 01:01:55.922001 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-27 01:01:55.922006 | orchestrator | Thursday 27 March 2025 00:53:07 +0000 (0:00:00.650) 0:05:15.781 ******** 2025-03-27 01:01:55.922011 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.922032 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.922037 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.922043 | orchestrator | 2025-03-27 01:01:55.922048 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-27 01:01:55.922054 | orchestrator | Thursday 27 March 2025 00:53:08 +0000 (0:00:00.896) 0:05:16.678 ******** 2025-03-27 01:01:55.922059 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.922065 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.922070 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.922076 | orchestrator | 2025-03-27 01:01:55.922081 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-03-27 01:01:55.922087 | orchestrator | Thursday 27 March 2025 00:53:08 +0000 (0:00:00.660) 0:05:17.339 ******** 2025-03-27 01:01:55.922093 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.922098 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.922104 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.922109 | orchestrator | 2025-03-27 01:01:55.922115 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-03-27 01:01:55.922124 | orchestrator | Thursday 27 March 2025 00:53:09 +0000 (0:00:00.425) 0:05:17.765 ******** 2025-03-27 01:01:55.922130 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.922135 | orchestrator | 2025-03-27 01:01:55.922141 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-03-27 01:01:55.922146 | orchestrator | Thursday 27 March 2025 00:53:10 +0000 (0:00:00.955) 0:05:18.720 ******** 2025-03-27 01:01:55.922152 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.922157 | orchestrator | 2025-03-27 01:01:55.922163 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] 
***************************** 2025-03-27 01:01:55.922168 | orchestrator | Thursday 27 March 2025 00:53:10 +0000 (0:00:00.194) 0:05:18.915 ******** 2025-03-27 01:01:55.922174 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-03-27 01:01:55.922179 | orchestrator | 2025-03-27 01:01:55.922185 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-03-27 01:01:55.922190 | orchestrator | Thursday 27 March 2025 00:53:11 +0000 (0:00:00.761) 0:05:19.676 ******** 2025-03-27 01:01:55.922196 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.922201 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.922207 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.922212 | orchestrator | 2025-03-27 01:01:55.922218 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-03-27 01:01:55.922226 | orchestrator | Thursday 27 March 2025 00:53:11 +0000 (0:00:00.414) 0:05:20.091 ******** 2025-03-27 01:01:55.922231 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.922237 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.922242 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.922248 | orchestrator | 2025-03-27 01:01:55.922253 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-03-27 01:01:55.922259 | orchestrator | Thursday 27 March 2025 00:53:12 +0000 (0:00:00.756) 0:05:20.847 ******** 2025-03-27 01:01:55.922264 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.922270 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.922276 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.922281 | orchestrator | 2025-03-27 01:01:55.922287 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-03-27 01:01:55.922292 | orchestrator | Thursday 27 March 2025 00:53:13 +0000 (0:00:01.224) 0:05:22.072 ******** 2025-03-27 01:01:55.922298 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.922304 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.922309 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.922315 | orchestrator | 2025-03-27 01:01:55.922320 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-03-27 01:01:55.922326 | orchestrator | Thursday 27 March 2025 00:53:14 +0000 (0:00:00.896) 0:05:22.968 ******** 2025-03-27 01:01:55.922331 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.922337 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.922343 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.922348 | orchestrator | 2025-03-27 01:01:55.922354 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-03-27 01:01:55.922371 | orchestrator | Thursday 27 March 2025 00:53:15 +0000 (0:00:00.780) 0:05:23.748 ******** 2025-03-27 01:01:55.922377 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.922382 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.922387 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.922392 | orchestrator | 2025-03-27 01:01:55.922397 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-03-27 01:01:55.922402 | orchestrator | Thursday 27 March 2025 00:53:16 +0000 (0:00:01.112) 0:05:24.861 ******** 2025-03-27 01:01:55.922406 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.922411 | 
orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.922416 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.922421 | orchestrator | 2025-03-27 01:01:55.922426 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-03-27 01:01:55.922434 | orchestrator | Thursday 27 March 2025 00:53:16 +0000 (0:00:00.354) 0:05:25.215 ******** 2025-03-27 01:01:55.922439 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.922457 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.922462 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.922467 | orchestrator | 2025-03-27 01:01:55.922472 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-03-27 01:01:55.922477 | orchestrator | Thursday 27 March 2025 00:53:17 +0000 (0:00:00.361) 0:05:25.577 ******** 2025-03-27 01:01:55.922481 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.922486 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.922491 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.922496 | orchestrator | 2025-03-27 01:01:55.922501 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-03-27 01:01:55.922505 | orchestrator | Thursday 27 March 2025 00:53:17 +0000 (0:00:00.345) 0:05:25.922 ******** 2025-03-27 01:01:55.922510 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.922515 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.922520 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.922524 | orchestrator | 2025-03-27 01:01:55.922529 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-03-27 01:01:55.922534 | orchestrator | Thursday 27 March 2025 00:53:18 +0000 (0:00:00.772) 0:05:26.695 ******** 2025-03-27 01:01:55.922539 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.922543 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.922548 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.922556 | orchestrator | 2025-03-27 01:01:55.922561 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-03-27 01:01:55.922566 | orchestrator | Thursday 27 March 2025 00:53:19 +0000 (0:00:01.295) 0:05:27.991 ******** 2025-03-27 01:01:55.922571 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.922576 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.922581 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.922586 | orchestrator | 2025-03-27 01:01:55.922590 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-03-27 01:01:55.922595 | orchestrator | Thursday 27 March 2025 00:53:19 +0000 (0:00:00.347) 0:05:28.338 ******** 2025-03-27 01:01:55.922600 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.922605 | orchestrator | 2025-03-27 01:01:55.922610 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-03-27 01:01:55.922615 | orchestrator | Thursday 27 March 2025 00:53:20 +0000 (0:00:00.932) 0:05:29.270 ******** 2025-03-27 01:01:55.922619 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.922624 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.922629 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.922634 | orchestrator | 
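The monitor bootstrap steps above (generate and create the monitor initial keyring, create the monitor directory, "ceph monitor mkfs with keyring") correspond to the standard Ceph manual-bootstrap commands. A hedged sketch of roughly equivalent commands follows; the fsid, key capabilities, and paths are placeholders rather than values from this deployment, and in this containerized setup the mkfs would run inside the Ceph container rather than directly on the host.

    # Hypothetical sketch of the commands behind the keyring/mkfs tasks above.
    # fsid, caps and paths are placeholders, not taken from this deployment.
    FSID=$(uuidgen)
    ceph-authtool --create-keyring /var/lib/ceph/tmp/ceph.mon.keyring \
        --gen-key -n mon. --cap mon 'allow *'
    mkdir -p /var/lib/ceph/mon/ceph-$(hostname -s)
    ceph-mon --cluster ceph --mkfs -i "$(hostname -s)" \
        --fsid "$FSID" --keyring /var/lib/ceph/tmp/ceph.mon.keyring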
2025-03-27 01:01:55.922639 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-03-27 01:01:55.922644 | orchestrator | Thursday 27 March 2025 00:53:21 +0000 (0:00:00.392) 0:05:29.662 ******** 2025-03-27 01:01:55.922648 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.922653 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.922658 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.922663 | orchestrator | 2025-03-27 01:01:55.922667 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-03-27 01:01:55.922672 | orchestrator | Thursday 27 March 2025 00:53:21 +0000 (0:00:00.392) 0:05:30.055 ******** 2025-03-27 01:01:55.922677 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.922682 | orchestrator | 2025-03-27 01:01:55.922687 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-03-27 01:01:55.922692 | orchestrator | Thursday 27 March 2025 00:53:22 +0000 (0:00:00.882) 0:05:30.938 ******** 2025-03-27 01:01:55.922696 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.922704 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.922709 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.922714 | orchestrator | 2025-03-27 01:01:55.922719 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-03-27 01:01:55.922726 | orchestrator | Thursday 27 March 2025 00:53:23 +0000 (0:00:01.379) 0:05:32.318 ******** 2025-03-27 01:01:55.922731 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.922735 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.922740 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.922745 | orchestrator | 2025-03-27 01:01:55.922750 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-03-27 01:01:55.922755 | orchestrator | Thursday 27 March 2025 00:53:25 +0000 (0:00:01.251) 0:05:33.569 ******** 2025-03-27 01:01:55.922759 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.922764 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.922769 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.922774 | orchestrator | 2025-03-27 01:01:55.922778 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-03-27 01:01:55.922783 | orchestrator | Thursday 27 March 2025 00:53:27 +0000 (0:00:02.075) 0:05:35.645 ******** 2025-03-27 01:01:55.922788 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.922793 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.922797 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.922802 | orchestrator | 2025-03-27 01:01:55.922807 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-03-27 01:01:55.922823 | orchestrator | Thursday 27 March 2025 00:53:29 +0000 (0:00:02.124) 0:05:37.770 ******** 2025-03-27 01:01:55.922829 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.922834 | orchestrator | 2025-03-27 01:01:55.922839 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] 
************* 2025-03-27 01:01:55.922843 | orchestrator | Thursday 27 March 2025 00:53:30 +0000 (0:00:00.847) 0:05:38.617 ******** 2025-03-27 01:01:55.922848 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-03-27 01:01:55.922853 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.922858 | orchestrator | 2025-03-27 01:01:55.922863 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-03-27 01:01:55.922868 | orchestrator | Thursday 27 March 2025 00:53:51 +0000 (0:00:21.569) 0:06:00.187 ******** 2025-03-27 01:01:55.922872 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.922877 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.922882 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.922887 | orchestrator | 2025-03-27 01:01:55.922892 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-03-27 01:01:55.922897 | orchestrator | Thursday 27 March 2025 00:53:59 +0000 (0:00:07.635) 0:06:07.822 ******** 2025-03-27 01:01:55.922901 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.922906 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.922911 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.922916 | orchestrator | 2025-03-27 01:01:55.922920 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-27 01:01:55.922925 | orchestrator | Thursday 27 March 2025 00:54:00 +0000 (0:00:01.258) 0:06:09.081 ******** 2025-03-27 01:01:55.922930 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.922935 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.922940 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.922944 | orchestrator | 2025-03-27 01:01:55.922949 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-03-27 01:01:55.922954 | orchestrator | Thursday 27 March 2025 00:54:01 +0000 (0:00:00.764) 0:06:09.845 ******** 2025-03-27 01:01:55.922959 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.922967 | orchestrator | 2025-03-27 01:01:55.922972 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-03-27 01:01:55.922977 | orchestrator | Thursday 27 March 2025 00:54:02 +0000 (0:00:00.970) 0:06:10.816 ******** 2025-03-27 01:01:55.922982 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.922987 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.922991 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.922996 | orchestrator | 2025-03-27 01:01:55.923001 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-03-27 01:01:55.923006 | orchestrator | Thursday 27 March 2025 00:54:02 +0000 (0:00:00.425) 0:06:11.241 ******** 2025-03-27 01:01:55.923011 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.923015 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.923020 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.923025 | orchestrator | 2025-03-27 01:01:55.923030 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-03-27 01:01:55.923034 | orchestrator | Thursday 27 March 2025 00:54:04 +0000 (0:00:01.371) 0:06:12.613 ******** 2025-03-27 01:01:55.923039 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:01:55.923044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:01:55.923049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:01:55.923054 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923058 | orchestrator | 2025-03-27 01:01:55.923063 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-03-27 01:01:55.923068 | orchestrator | Thursday 27 March 2025 00:54:05 +0000 (0:00:01.255) 0:06:13.868 ******** 2025-03-27 01:01:55.923073 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.923078 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.923082 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.923087 | orchestrator | 2025-03-27 01:01:55.923092 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-27 01:01:55.923097 | orchestrator | Thursday 27 March 2025 00:54:05 +0000 (0:00:00.421) 0:06:14.290 ******** 2025-03-27 01:01:55.923101 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.923106 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.923111 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.923116 | orchestrator | 2025-03-27 01:01:55.923120 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-03-27 01:01:55.923125 | orchestrator | 2025-03-27 01:01:55.923130 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-27 01:01:55.923135 | orchestrator | Thursday 27 March 2025 00:54:08 +0000 (0:00:02.404) 0:06:16.694 ******** 2025-03-27 01:01:55.923139 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.923144 | orchestrator | 2025-03-27 01:01:55.923149 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-03-27 01:01:55.923154 | orchestrator | Thursday 27 March 2025 00:54:09 +0000 (0:00:00.922) 0:06:17.617 ******** 2025-03-27 01:01:55.923159 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.923163 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.923168 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.923173 | orchestrator | 2025-03-27 01:01:55.923180 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-27 01:01:55.923185 | orchestrator | Thursday 27 March 2025 00:54:09 +0000 (0:00:00.758) 0:06:18.375 ******** 2025-03-27 01:01:55.923190 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923195 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923200 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923207 | orchestrator | 2025-03-27 01:01:55.923212 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-27 01:01:55.923227 | orchestrator | Thursday 27 March 2025 00:54:10 +0000 (0:00:00.372) 0:06:18.747 ******** 2025-03-27 01:01:55.923233 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923241 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923246 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923251 | orchestrator | 2025-03-27 01:01:55.923255 | orchestrator | TASK [ceph-handler : check for a rgw container] 
******************************** 2025-03-27 01:01:55.923260 | orchestrator | Thursday 27 March 2025 00:54:10 +0000 (0:00:00.642) 0:06:19.390 ******** 2025-03-27 01:01:55.923265 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923272 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923277 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923282 | orchestrator | 2025-03-27 01:01:55.923287 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-27 01:01:55.923292 | orchestrator | Thursday 27 March 2025 00:54:11 +0000 (0:00:00.351) 0:06:19.742 ******** 2025-03-27 01:01:55.923297 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.923301 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.923306 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.923311 | orchestrator | 2025-03-27 01:01:55.923316 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-27 01:01:55.923321 | orchestrator | Thursday 27 March 2025 00:54:12 +0000 (0:00:00.753) 0:06:20.495 ******** 2025-03-27 01:01:55.923325 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923330 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923335 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923340 | orchestrator | 2025-03-27 01:01:55.923344 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-27 01:01:55.923349 | orchestrator | Thursday 27 March 2025 00:54:12 +0000 (0:00:00.410) 0:06:20.905 ******** 2025-03-27 01:01:55.923354 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923359 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923364 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923368 | orchestrator | 2025-03-27 01:01:55.923373 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-27 01:01:55.923378 | orchestrator | Thursday 27 March 2025 00:54:13 +0000 (0:00:00.643) 0:06:21.549 ******** 2025-03-27 01:01:55.923383 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923388 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923392 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923397 | orchestrator | 2025-03-27 01:01:55.923402 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-27 01:01:55.923407 | orchestrator | Thursday 27 March 2025 00:54:13 +0000 (0:00:00.369) 0:06:21.918 ******** 2025-03-27 01:01:55.923411 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923416 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923421 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923426 | orchestrator | 2025-03-27 01:01:55.923431 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-27 01:01:55.923435 | orchestrator | Thursday 27 March 2025 00:54:13 +0000 (0:00:00.369) 0:06:22.287 ******** 2025-03-27 01:01:55.923467 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923473 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923478 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923483 | orchestrator | 2025-03-27 01:01:55.923488 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-27 01:01:55.923492 | orchestrator | 
Thursday 27 March 2025 00:54:14 +0000 (0:00:00.669) 0:06:22.957 ******** 2025-03-27 01:01:55.923497 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.923502 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.923507 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.923512 | orchestrator | 2025-03-27 01:01:55.923517 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-27 01:01:55.923521 | orchestrator | Thursday 27 March 2025 00:54:15 +0000 (0:00:00.809) 0:06:23.767 ******** 2025-03-27 01:01:55.923526 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923531 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923541 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923546 | orchestrator | 2025-03-27 01:01:55.923551 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-27 01:01:55.923556 | orchestrator | Thursday 27 March 2025 00:54:15 +0000 (0:00:00.356) 0:06:24.124 ******** 2025-03-27 01:01:55.923560 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.923565 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.923570 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.923575 | orchestrator | 2025-03-27 01:01:55.923580 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-27 01:01:55.923584 | orchestrator | Thursday 27 March 2025 00:54:16 +0000 (0:00:00.394) 0:06:24.519 ******** 2025-03-27 01:01:55.923589 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923594 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923599 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923604 | orchestrator | 2025-03-27 01:01:55.923608 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-27 01:01:55.923613 | orchestrator | Thursday 27 March 2025 00:54:16 +0000 (0:00:00.645) 0:06:25.164 ******** 2025-03-27 01:01:55.923618 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923623 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923628 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923633 | orchestrator | 2025-03-27 01:01:55.923637 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-27 01:01:55.923642 | orchestrator | Thursday 27 March 2025 00:54:17 +0000 (0:00:00.397) 0:06:25.562 ******** 2025-03-27 01:01:55.923647 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923652 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923657 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923661 | orchestrator | 2025-03-27 01:01:55.923666 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-27 01:01:55.923673 | orchestrator | Thursday 27 March 2025 00:54:17 +0000 (0:00:00.378) 0:06:25.941 ******** 2025-03-27 01:01:55.923678 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923683 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923688 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923693 | orchestrator | 2025-03-27 01:01:55.923698 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-27 01:01:55.923715 | orchestrator | Thursday 27 March 2025 00:54:17 +0000 (0:00:00.372) 0:06:26.314 ******** 2025-03-27 
01:01:55.923721 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923725 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923730 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923735 | orchestrator | 2025-03-27 01:01:55.923740 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-27 01:01:55.923745 | orchestrator | Thursday 27 March 2025 00:54:18 +0000 (0:00:00.688) 0:06:27.002 ******** 2025-03-27 01:01:55.923750 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.923754 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.923759 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.923766 | orchestrator | 2025-03-27 01:01:55.923771 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-27 01:01:55.923776 | orchestrator | Thursday 27 March 2025 00:54:18 +0000 (0:00:00.388) 0:06:27.390 ******** 2025-03-27 01:01:55.923781 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.923786 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.923791 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.923796 | orchestrator | 2025-03-27 01:01:55.923800 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-27 01:01:55.923805 | orchestrator | Thursday 27 March 2025 00:54:19 +0000 (0:00:00.390) 0:06:27.781 ******** 2025-03-27 01:01:55.923810 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923815 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923820 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923827 | orchestrator | 2025-03-27 01:01:55.923832 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-27 01:01:55.923837 | orchestrator | Thursday 27 March 2025 00:54:19 +0000 (0:00:00.417) 0:06:28.199 ******** 2025-03-27 01:01:55.923842 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923847 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923852 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923856 | orchestrator | 2025-03-27 01:01:55.923861 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-27 01:01:55.923866 | orchestrator | Thursday 27 March 2025 00:54:20 +0000 (0:00:00.689) 0:06:28.888 ******** 2025-03-27 01:01:55.923871 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923876 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923880 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923885 | orchestrator | 2025-03-27 01:01:55.923890 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-27 01:01:55.923895 | orchestrator | Thursday 27 March 2025 00:54:20 +0000 (0:00:00.360) 0:06:29.249 ******** 2025-03-27 01:01:55.923900 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923904 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923909 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923914 | orchestrator | 2025-03-27 01:01:55.923919 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-27 01:01:55.923924 | orchestrator | Thursday 27 March 2025 00:54:21 +0000 (0:00:00.385) 0:06:29.634 ******** 2025-03-27 01:01:55.923928 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923933 | 
orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923938 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923942 | orchestrator | 2025-03-27 01:01:55.923947 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-27 01:01:55.923952 | orchestrator | Thursday 27 March 2025 00:54:21 +0000 (0:00:00.372) 0:06:30.007 ******** 2025-03-27 01:01:55.923957 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923962 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923967 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.923971 | orchestrator | 2025-03-27 01:01:55.923976 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-27 01:01:55.923981 | orchestrator | Thursday 27 March 2025 00:54:22 +0000 (0:00:00.667) 0:06:30.675 ******** 2025-03-27 01:01:55.923986 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.923991 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.923995 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924000 | orchestrator | 2025-03-27 01:01:55.924005 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-27 01:01:55.924010 | orchestrator | Thursday 27 March 2025 00:54:22 +0000 (0:00:00.400) 0:06:31.076 ******** 2025-03-27 01:01:55.924015 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924020 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924024 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924029 | orchestrator | 2025-03-27 01:01:55.924034 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-27 01:01:55.924039 | orchestrator | Thursday 27 March 2025 00:54:22 +0000 (0:00:00.365) 0:06:31.441 ******** 2025-03-27 01:01:55.924043 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924048 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924053 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924058 | orchestrator | 2025-03-27 01:01:55.924063 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-27 01:01:55.924068 | orchestrator | Thursday 27 March 2025 00:54:23 +0000 (0:00:00.418) 0:06:31.860 ******** 2025-03-27 01:01:55.924073 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924077 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924082 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924090 | orchestrator | 2025-03-27 01:01:55.924095 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-27 01:01:55.924100 | orchestrator | Thursday 27 March 2025 00:54:24 +0000 (0:00:00.670) 0:06:32.530 ******** 2025-03-27 01:01:55.924105 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924109 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924114 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924119 | orchestrator | 2025-03-27 01:01:55.924124 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-27 01:01:55.924129 | orchestrator | Thursday 27 March 2025 00:54:24 +0000 (0:00:00.406) 0:06:32.936 ******** 2025-03-27 01:01:55.924134 | orchestrator | skipping: 
[testbed-node-0] 2025-03-27 01:01:55.924138 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924143 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924148 | orchestrator | 2025-03-27 01:01:55.924164 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-27 01:01:55.924170 | orchestrator | Thursday 27 March 2025 00:54:24 +0000 (0:00:00.371) 0:06:33.308 ******** 2025-03-27 01:01:55.924175 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-27 01:01:55.924179 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-27 01:01:55.924184 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924189 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-27 01:01:55.924194 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-27 01:01:55.924199 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924203 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-27 01:01:55.924208 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-27 01:01:55.924213 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924218 | orchestrator | 2025-03-27 01:01:55.924223 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-27 01:01:55.924227 | orchestrator | Thursday 27 March 2025 00:54:25 +0000 (0:00:00.407) 0:06:33.716 ******** 2025-03-27 01:01:55.924232 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-03-27 01:01:55.924237 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-03-27 01:01:55.924242 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924247 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-03-27 01:01:55.924251 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-03-27 01:01:55.924256 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924261 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-03-27 01:01:55.924266 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-03-27 01:01:55.924271 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924275 | orchestrator | 2025-03-27 01:01:55.924280 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-27 01:01:55.924287 | orchestrator | Thursday 27 March 2025 00:54:26 +0000 (0:00:00.810) 0:06:34.526 ******** 2025-03-27 01:01:55.924437 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924458 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924465 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924473 | orchestrator | 2025-03-27 01:01:55.924481 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-27 01:01:55.924488 | orchestrator | Thursday 27 March 2025 00:54:26 +0000 (0:00:00.411) 0:06:34.938 ******** 2025-03-27 01:01:55.924493 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924497 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924502 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924511 | orchestrator | 2025-03-27 01:01:55.924516 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-27 01:01:55.924522 | orchestrator | Thursday 27 March 2025 
00:54:26 +0000 (0:00:00.366) 0:06:35.304 ******** 2025-03-27 01:01:55.924533 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924537 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924542 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924547 | orchestrator | 2025-03-27 01:01:55.924552 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-27 01:01:55.924557 | orchestrator | Thursday 27 March 2025 00:54:27 +0000 (0:00:00.354) 0:06:35.659 ******** 2025-03-27 01:01:55.924562 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924566 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924571 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924576 | orchestrator | 2025-03-27 01:01:55.924581 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-27 01:01:55.924585 | orchestrator | Thursday 27 March 2025 00:54:27 +0000 (0:00:00.684) 0:06:36.343 ******** 2025-03-27 01:01:55.924590 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924595 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924600 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924604 | orchestrator | 2025-03-27 01:01:55.924609 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-27 01:01:55.924614 | orchestrator | Thursday 27 March 2025 00:54:28 +0000 (0:00:00.396) 0:06:36.740 ******** 2025-03-27 01:01:55.924619 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924623 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924628 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924633 | orchestrator | 2025-03-27 01:01:55.924638 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-27 01:01:55.924643 | orchestrator | Thursday 27 March 2025 00:54:28 +0000 (0:00:00.405) 0:06:37.145 ******** 2025-03-27 01:01:55.924647 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.924652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.924657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.924662 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924666 | orchestrator | 2025-03-27 01:01:55.924671 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-27 01:01:55.924676 | orchestrator | Thursday 27 March 2025 00:54:29 +0000 (0:00:00.472) 0:06:37.618 ******** 2025-03-27 01:01:55.924681 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.924685 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.924690 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.924695 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924700 | orchestrator | 2025-03-27 01:01:55.924704 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-27 01:01:55.924709 | orchestrator | Thursday 27 March 2025 00:54:29 +0000 (0:00:00.772) 0:06:38.390 ******** 2025-03-27 01:01:55.924714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.924719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 
01:01:55.924724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.924748 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924754 | orchestrator | 2025-03-27 01:01:55.924759 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.924764 | orchestrator | Thursday 27 March 2025 00:54:31 +0000 (0:00:01.082) 0:06:39.472 ******** 2025-03-27 01:01:55.924769 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924773 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924778 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924783 | orchestrator | 2025-03-27 01:01:55.924788 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-27 01:01:55.924792 | orchestrator | Thursday 27 March 2025 00:54:31 +0000 (0:00:00.394) 0:06:39.867 ******** 2025-03-27 01:01:55.924813 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-27 01:01:55.924822 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924827 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-27 01:01:55.924832 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924837 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-27 01:01:55.924842 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924846 | orchestrator | 2025-03-27 01:01:55.924851 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-27 01:01:55.924856 | orchestrator | Thursday 27 March 2025 00:54:32 +0000 (0:00:00.847) 0:06:40.715 ******** 2025-03-27 01:01:55.924861 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924866 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924870 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924875 | orchestrator | 2025-03-27 01:01:55.924880 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.924885 | orchestrator | Thursday 27 March 2025 00:54:32 +0000 (0:00:00.440) 0:06:41.156 ******** 2025-03-27 01:01:55.924890 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924894 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924899 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924904 | orchestrator | 2025-03-27 01:01:55.924909 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-27 01:01:55.924914 | orchestrator | Thursday 27 March 2025 00:54:33 +0000 (0:00:00.713) 0:06:41.869 ******** 2025-03-27 01:01:55.924918 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-27 01:01:55.924923 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924928 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-27 01:01:55.924933 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.924938 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-27 01:01:55.924942 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924947 | orchestrator | 2025-03-27 01:01:55.924952 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-27 01:01:55.924957 | orchestrator | Thursday 27 March 2025 00:54:34 +0000 (0:00:00.580) 0:06:42.450 ******** 2025-03-27 01:01:55.924961 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.924966 | orchestrator 
| skipping: [testbed-node-1] 2025-03-27 01:01:55.924971 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.924976 | orchestrator | 2025-03-27 01:01:55.924981 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-27 01:01:55.924985 | orchestrator | Thursday 27 March 2025 00:54:34 +0000 (0:00:00.385) 0:06:42.835 ******** 2025-03-27 01:01:55.924990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.924995 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.925000 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.925005 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-27 01:01:55.925009 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-27 01:01:55.925014 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-27 01:01:55.925019 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.925024 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.925029 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-27 01:01:55.925033 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-27 01:01:55.925038 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-27 01:01:55.925043 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.925048 | orchestrator | 2025-03-27 01:01:55.925052 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-27 01:01:55.925057 | orchestrator | Thursday 27 March 2025 00:54:35 +0000 (0:00:00.972) 0:06:43.807 ******** 2025-03-27 01:01:55.925062 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.925067 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.925075 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.925080 | orchestrator | 2025-03-27 01:01:55.925087 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-27 01:01:55.925092 | orchestrator | Thursday 27 March 2025 00:54:36 +0000 (0:00:00.711) 0:06:44.519 ******** 2025-03-27 01:01:55.925097 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.925101 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.925106 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.925111 | orchestrator | 2025-03-27 01:01:55.925116 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-27 01:01:55.925120 | orchestrator | Thursday 27 March 2025 00:54:37 +0000 (0:00:00.939) 0:06:45.459 ******** 2025-03-27 01:01:55.925125 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.925130 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.925135 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.925140 | orchestrator | 2025-03-27 01:01:55.925144 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-27 01:01:55.925149 | orchestrator | Thursday 27 March 2025 00:54:37 +0000 (0:00:00.594) 0:06:46.053 ******** 2025-03-27 01:01:55.925154 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.925159 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.925163 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.925168 | orchestrator | 2025-03-27 01:01:55.925173 | orchestrator 
| TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-03-27 01:01:55.925189 | orchestrator | Thursday 27 March 2025 00:54:38 +0000 (0:00:00.904) 0:06:46.958 ******** 2025-03-27 01:01:55.925195 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:01:55.925200 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:01:55.925205 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:01:55.925210 | orchestrator | 2025-03-27 01:01:55.925215 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-03-27 01:01:55.925219 | orchestrator | Thursday 27 March 2025 00:54:39 +0000 (0:00:00.709) 0:06:47.667 ******** 2025-03-27 01:01:55.925224 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.925229 | orchestrator | 2025-03-27 01:01:55.925234 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-03-27 01:01:55.925239 | orchestrator | Thursday 27 March 2025 00:54:39 +0000 (0:00:00.631) 0:06:48.299 ******** 2025-03-27 01:01:55.925243 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.925248 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.925253 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.925258 | orchestrator | 2025-03-27 01:01:55.925263 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-03-27 01:01:55.925267 | orchestrator | Thursday 27 March 2025 00:54:40 +0000 (0:00:01.002) 0:06:49.301 ******** 2025-03-27 01:01:55.925272 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.925281 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.925286 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.925291 | orchestrator | 2025-03-27 01:01:55.925295 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-03-27 01:01:55.925300 | orchestrator | Thursday 27 March 2025 00:54:41 +0000 (0:00:00.378) 0:06:49.680 ******** 2025-03-27 01:01:55.925305 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-27 01:01:55.925310 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-27 01:01:55.925315 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-27 01:01:55.925320 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-03-27 01:01:55.925325 | orchestrator | 2025-03-27 01:01:55.925329 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-03-27 01:01:55.925334 | orchestrator | Thursday 27 March 2025 00:54:49 +0000 (0:00:08.363) 0:06:58.044 ******** 2025-03-27 01:01:55.925343 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.925348 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.925353 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.925358 | orchestrator | 2025-03-27 01:01:55.925363 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-03-27 01:01:55.925367 | orchestrator | Thursday 27 March 2025 00:54:50 +0000 (0:00:00.594) 0:06:58.639 ******** 2025-03-27 01:01:55.925372 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-03-27 01:01:55.925377 | orchestrator | skipping: [testbed-node-1] => (item=None)  
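The keyring steps above ('create ceph mgr keyring(s) on a mon node' and 'get keys from monitors') are essentially ceph auth calls issued from the first monitor. A minimal sketch of the equivalent CLI, not the role's exact invocation, assuming the default cluster name and using this testbed's hostnames as examples:

  # Create a mgr keyring with the standard mgr capabilities (run on a mon node).
  ceph auth get-or-create mgr.testbed-node-0 \
      mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
      -o /etc/ceph/ceph.mgr.testbed-node-0.keyring

  # Re-read the key later, e.g. before copying it to the mgr host.
  ceph auth get mgr.testbed-node-0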
2025-03-27 01:01:55.925382 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-03-27 01:01:55.925387 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-03-27 01:01:55.925392 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:01:55.925396 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:01:55.925401 | orchestrator | 2025-03-27 01:01:55.925406 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-03-27 01:01:55.925411 | orchestrator | Thursday 27 March 2025 00:54:52 +0000 (0:00:01.848) 0:07:00.488 ******** 2025-03-27 01:01:55.925416 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-03-27 01:01:55.925420 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-03-27 01:01:55.925425 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-03-27 01:01:55.925430 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-27 01:01:55.925435 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-03-27 01:01:55.925472 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-03-27 01:01:55.925478 | orchestrator | 2025-03-27 01:01:55.925483 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-03-27 01:01:55.925488 | orchestrator | Thursday 27 March 2025 00:54:53 +0000 (0:00:01.332) 0:07:01.821 ******** 2025-03-27 01:01:55.925492 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.925497 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.925502 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.925507 | orchestrator | 2025-03-27 01:01:55.925512 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-03-27 01:01:55.925517 | orchestrator | Thursday 27 March 2025 00:54:54 +0000 (0:00:01.054) 0:07:02.875 ******** 2025-03-27 01:01:55.925521 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.925526 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.925531 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.925536 | orchestrator | 2025-03-27 01:01:55.925541 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-03-27 01:01:55.925545 | orchestrator | Thursday 27 March 2025 00:54:54 +0000 (0:00:00.379) 0:07:03.254 ******** 2025-03-27 01:01:55.925550 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.925555 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.925560 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.925564 | orchestrator | 2025-03-27 01:01:55.925572 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-03-27 01:01:55.925577 | orchestrator | Thursday 27 March 2025 00:54:55 +0000 (0:00:00.389) 0:07:03.644 ******** 2025-03-27 01:01:55.925582 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.925587 | orchestrator | 2025-03-27 01:01:55.925592 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-03-27 01:01:55.925597 | orchestrator | Thursday 27 March 2025 00:54:56 +0000 (0:00:00.931) 0:07:04.575 ******** 2025-03-27 01:01:55.925601 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.925606 | orchestrator | skipping: [testbed-node-1] 2025-03-27 
01:01:55.925623 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.925629 | orchestrator | 2025-03-27 01:01:55.925634 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-03-27 01:01:55.925646 | orchestrator | Thursday 27 March 2025 00:54:56 +0000 (0:00:00.398) 0:07:04.973 ******** 2025-03-27 01:01:55.925651 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.925656 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.925661 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.925665 | orchestrator | 2025-03-27 01:01:55.925670 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-03-27 01:01:55.925675 | orchestrator | Thursday 27 March 2025 00:54:56 +0000 (0:00:00.364) 0:07:05.338 ******** 2025-03-27 01:01:55.925680 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.925685 | orchestrator | 2025-03-27 01:01:55.925690 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-03-27 01:01:55.925695 | orchestrator | Thursday 27 March 2025 00:54:57 +0000 (0:00:00.866) 0:07:06.204 ******** 2025-03-27 01:01:55.925699 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.925704 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.925709 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.925714 | orchestrator | 2025-03-27 01:01:55.925719 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-03-27 01:01:55.925724 | orchestrator | Thursday 27 March 2025 00:54:59 +0000 (0:00:01.255) 0:07:07.459 ******** 2025-03-27 01:01:55.925728 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.925733 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.925738 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.925743 | orchestrator | 2025-03-27 01:01:55.925747 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-03-27 01:01:55.925752 | orchestrator | Thursday 27 March 2025 00:55:00 +0000 (0:00:01.265) 0:07:08.725 ******** 2025-03-27 01:01:55.925757 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.925762 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.925767 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.925771 | orchestrator | 2025-03-27 01:01:55.925776 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-03-27 01:01:55.925781 | orchestrator | Thursday 27 March 2025 00:55:02 +0000 (0:00:01.960) 0:07:10.685 ******** 2025-03-27 01:01:55.925786 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.925791 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.925795 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.925800 | orchestrator | 2025-03-27 01:01:55.925805 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-03-27 01:01:55.925810 | orchestrator | Thursday 27 March 2025 00:55:04 +0000 (0:00:02.169) 0:07:12.854 ******** 2025-03-27 01:01:55.925815 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.925819 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.925824 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-03-27 01:01:55.925830 | 
orchestrator | 2025-03-27 01:01:55.925834 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-03-27 01:01:55.925839 | orchestrator | Thursday 27 March 2025 00:55:05 +0000 (0:00:00.730) 0:07:13.585 ******** 2025-03-27 01:01:55.925844 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-03-27 01:01:55.925849 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-03-27 01:01:55.925854 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-03-27 01:01:55.925859 | orchestrator | 2025-03-27 01:01:55.925863 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-03-27 01:01:55.925868 | orchestrator | Thursday 27 March 2025 00:55:18 +0000 (0:00:13.839) 0:07:27.424 ******** 2025-03-27 01:01:55.925873 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-03-27 01:01:55.925878 | orchestrator | 2025-03-27 01:01:55.925883 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-03-27 01:01:55.925891 | orchestrator | Thursday 27 March 2025 00:55:20 +0000 (0:00:01.807) 0:07:29.231 ******** 2025-03-27 01:01:55.925896 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.925901 | orchestrator | 2025-03-27 01:01:55.925905 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-03-27 01:01:55.925910 | orchestrator | Thursday 27 March 2025 00:55:21 +0000 (0:00:00.485) 0:07:29.717 ******** 2025-03-27 01:01:55.925915 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.925920 | orchestrator | 2025-03-27 01:01:55.925925 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-03-27 01:01:55.925929 | orchestrator | Thursday 27 March 2025 00:55:21 +0000 (0:00:00.310) 0:07:30.027 ******** 2025-03-27 01:01:55.925934 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-03-27 01:01:55.925939 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-03-27 01:01:55.925946 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-03-27 01:01:55.925951 | orchestrator | 2025-03-27 01:01:55.925956 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-03-27 01:01:55.925961 | orchestrator | Thursday 27 March 2025 00:55:28 +0000 (0:00:06.691) 0:07:36.718 ******** 2025-03-27 01:01:55.925966 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-03-27 01:01:55.925971 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-03-27 01:01:55.925976 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-03-27 01:01:55.925980 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-03-27 01:01:55.925985 | orchestrator | 2025-03-27 01:01:55.925990 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-27 01:01:55.926006 | orchestrator | Thursday 27 March 2025 00:55:33 +0000 (0:00:05.070) 0:07:41.789 ******** 2025-03-27 01:01:55.926012 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.926042 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.926047 | 
orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.926052 | orchestrator | 2025-03-27 01:01:55.926057 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-03-27 01:01:55.926062 | orchestrator | Thursday 27 March 2025 00:55:34 +0000 (0:00:00.745) 0:07:42.535 ******** 2025-03-27 01:01:55.926066 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:01:55.926071 | orchestrator | 2025-03-27 01:01:55.926076 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-03-27 01:01:55.926081 | orchestrator | Thursday 27 March 2025 00:55:34 +0000 (0:00:00.889) 0:07:43.424 ******** 2025-03-27 01:01:55.926086 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.926091 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.926096 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.926101 | orchestrator | 2025-03-27 01:01:55.926106 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-03-27 01:01:55.926110 | orchestrator | Thursday 27 March 2025 00:55:35 +0000 (0:00:00.429) 0:07:43.854 ******** 2025-03-27 01:01:55.926115 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.926120 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.926125 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.926130 | orchestrator | 2025-03-27 01:01:55.926134 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-03-27 01:01:55.926139 | orchestrator | Thursday 27 March 2025 00:55:37 +0000 (0:00:01.617) 0:07:45.471 ******** 2025-03-27 01:01:55.926144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:01:55.926149 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:01:55.926154 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:01:55.926159 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.926167 | orchestrator | 2025-03-27 01:01:55.926172 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-03-27 01:01:55.926177 | orchestrator | Thursday 27 March 2025 00:55:37 +0000 (0:00:00.842) 0:07:46.313 ******** 2025-03-27 01:01:55.926181 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.926186 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.926191 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.926199 | orchestrator | 2025-03-27 01:01:55.926204 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-27 01:01:55.926209 | orchestrator | Thursday 27 March 2025 00:55:38 +0000 (0:00:00.415) 0:07:46.728 ******** 2025-03-27 01:01:55.926213 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.926218 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.926223 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.926228 | orchestrator | 2025-03-27 01:01:55.926233 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-03-27 01:01:55.926237 | orchestrator | 2025-03-27 01:01:55.926242 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-27 01:01:55.926247 | orchestrator | Thursday 27 March 2025 00:55:40 +0000 (0:00:02.209) 0:07:48.937 ******** 2025-03-27 
01:01:55.926252 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.926257 | orchestrator | 2025-03-27 01:01:55.926262 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-03-27 01:01:55.926267 | orchestrator | Thursday 27 March 2025 00:55:41 +0000 (0:00:00.892) 0:07:49.830 ******** 2025-03-27 01:01:55.926272 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926276 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926281 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926286 | orchestrator | 2025-03-27 01:01:55.926291 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-27 01:01:55.926296 | orchestrator | Thursday 27 March 2025 00:55:41 +0000 (0:00:00.342) 0:07:50.173 ******** 2025-03-27 01:01:55.926300 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.926305 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.926310 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.926315 | orchestrator | 2025-03-27 01:01:55.926320 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-27 01:01:55.926325 | orchestrator | Thursday 27 March 2025 00:55:42 +0000 (0:00:00.762) 0:07:50.935 ******** 2025-03-27 01:01:55.926329 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.926334 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.926339 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.926344 | orchestrator | 2025-03-27 01:01:55.926349 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-03-27 01:01:55.926354 | orchestrator | Thursday 27 March 2025 00:55:43 +0000 (0:00:01.228) 0:07:52.164 ******** 2025-03-27 01:01:55.926358 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.926363 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.926368 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.926373 | orchestrator | 2025-03-27 01:01:55.926378 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-27 01:01:55.926382 | orchestrator | Thursday 27 March 2025 00:55:44 +0000 (0:00:00.913) 0:07:53.077 ******** 2025-03-27 01:01:55.926387 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926392 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926397 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926402 | orchestrator | 2025-03-27 01:01:55.926409 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-27 01:01:55.926414 | orchestrator | Thursday 27 March 2025 00:55:44 +0000 (0:00:00.358) 0:07:53.436 ******** 2025-03-27 01:01:55.926419 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926424 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926429 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926437 | orchestrator | 2025-03-27 01:01:55.926455 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-27 01:01:55.926461 | orchestrator | Thursday 27 March 2025 00:55:45 +0000 (0:00:00.660) 0:07:54.096 ******** 2025-03-27 01:01:55.926466 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926483 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926489 | 
orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926493 | orchestrator | 2025-03-27 01:01:55.926498 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-27 01:01:55.926503 | orchestrator | Thursday 27 March 2025 00:55:45 +0000 (0:00:00.328) 0:07:54.424 ******** 2025-03-27 01:01:55.926508 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926513 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926518 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926522 | orchestrator | 2025-03-27 01:01:55.926527 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-27 01:01:55.926532 | orchestrator | Thursday 27 March 2025 00:55:46 +0000 (0:00:00.368) 0:07:54.793 ******** 2025-03-27 01:01:55.926537 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926542 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926547 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926552 | orchestrator | 2025-03-27 01:01:55.926557 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-27 01:01:55.926561 | orchestrator | Thursday 27 March 2025 00:55:46 +0000 (0:00:00.370) 0:07:55.164 ******** 2025-03-27 01:01:55.926566 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926571 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926576 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926581 | orchestrator | 2025-03-27 01:01:55.926586 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-27 01:01:55.926590 | orchestrator | Thursday 27 March 2025 00:55:47 +0000 (0:00:00.671) 0:07:55.835 ******** 2025-03-27 01:01:55.926595 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.926600 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.926605 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.926610 | orchestrator | 2025-03-27 01:01:55.926615 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-27 01:01:55.926619 | orchestrator | Thursday 27 March 2025 00:55:48 +0000 (0:00:00.764) 0:07:56.600 ******** 2025-03-27 01:01:55.926624 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926629 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926634 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926639 | orchestrator | 2025-03-27 01:01:55.926643 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-27 01:01:55.926648 | orchestrator | Thursday 27 March 2025 00:55:48 +0000 (0:00:00.347) 0:07:56.948 ******** 2025-03-27 01:01:55.926653 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926658 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926663 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926667 | orchestrator | 2025-03-27 01:01:55.926672 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-27 01:01:55.926677 | orchestrator | Thursday 27 March 2025 00:55:48 +0000 (0:00:00.331) 0:07:57.279 ******** 2025-03-27 01:01:55.926682 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.926687 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.926691 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.926696 | orchestrator | 2025-03-27 
01:01:55.926701 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-27 01:01:55.926706 | orchestrator | Thursday 27 March 2025 00:55:49 +0000 (0:00:00.667) 0:07:57.946 ******** 2025-03-27 01:01:55.926711 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.926715 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.926720 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.926725 | orchestrator | 2025-03-27 01:01:55.926730 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-27 01:01:55.926738 | orchestrator | Thursday 27 March 2025 00:55:49 +0000 (0:00:00.390) 0:07:58.336 ******** 2025-03-27 01:01:55.926743 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.926748 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.926753 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.926760 | orchestrator | 2025-03-27 01:01:55.926765 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-27 01:01:55.926770 | orchestrator | Thursday 27 March 2025 00:55:50 +0000 (0:00:00.394) 0:07:58.731 ******** 2025-03-27 01:01:55.926775 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926780 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926784 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926789 | orchestrator | 2025-03-27 01:01:55.926794 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-27 01:01:55.926799 | orchestrator | Thursday 27 March 2025 00:55:50 +0000 (0:00:00.313) 0:07:59.044 ******** 2025-03-27 01:01:55.926804 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926808 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926813 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926818 | orchestrator | 2025-03-27 01:01:55.926823 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-27 01:01:55.926828 | orchestrator | Thursday 27 March 2025 00:55:51 +0000 (0:00:00.621) 0:07:59.666 ******** 2025-03-27 01:01:55.926832 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926837 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926842 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926847 | orchestrator | 2025-03-27 01:01:55.926851 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-27 01:01:55.926856 | orchestrator | Thursday 27 March 2025 00:55:51 +0000 (0:00:00.364) 0:08:00.030 ******** 2025-03-27 01:01:55.926861 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.926866 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.926871 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.926875 | orchestrator | 2025-03-27 01:01:55.926880 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-27 01:01:55.926885 | orchestrator | Thursday 27 March 2025 00:55:51 +0000 (0:00:00.370) 0:08:00.401 ******** 2025-03-27 01:01:55.926890 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926895 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926899 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926904 | orchestrator | 2025-03-27 01:01:55.926911 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 
2025-03-27 01:01:55.926916 | orchestrator | Thursday 27 March 2025 00:55:52 +0000 (0:00:00.358) 0:08:00.760 ******** 2025-03-27 01:01:55.926921 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926926 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926942 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926948 | orchestrator | 2025-03-27 01:01:55.926953 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-27 01:01:55.926958 | orchestrator | Thursday 27 March 2025 00:55:53 +0000 (0:00:00.743) 0:08:01.503 ******** 2025-03-27 01:01:55.926963 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926968 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.926972 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.926977 | orchestrator | 2025-03-27 01:01:55.926982 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-27 01:01:55.926987 | orchestrator | Thursday 27 March 2025 00:55:53 +0000 (0:00:00.375) 0:08:01.879 ******** 2025-03-27 01:01:55.926992 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.926997 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927001 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927006 | orchestrator | 2025-03-27 01:01:55.927011 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-27 01:01:55.927016 | orchestrator | Thursday 27 March 2025 00:55:53 +0000 (0:00:00.342) 0:08:02.222 ******** 2025-03-27 01:01:55.927024 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927029 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927033 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927038 | orchestrator | 2025-03-27 01:01:55.927043 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-27 01:01:55.927048 | orchestrator | Thursday 27 March 2025 00:55:54 +0000 (0:00:00.420) 0:08:02.643 ******** 2025-03-27 01:01:55.927053 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927058 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927062 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927067 | orchestrator | 2025-03-27 01:01:55.927072 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-27 01:01:55.927077 | orchestrator | Thursday 27 March 2025 00:55:54 +0000 (0:00:00.651) 0:08:03.295 ******** 2025-03-27 01:01:55.927082 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927087 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927091 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927096 | orchestrator | 2025-03-27 01:01:55.927101 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-27 01:01:55.927106 | orchestrator | Thursday 27 March 2025 00:55:55 +0000 (0:00:00.410) 0:08:03.706 ******** 2025-03-27 01:01:55.927111 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927115 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927120 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927125 | orchestrator | 2025-03-27 01:01:55.927130 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-27 
01:01:55.927135 | orchestrator | Thursday 27 March 2025 00:55:55 +0000 (0:00:00.430) 0:08:04.137 ******** 2025-03-27 01:01:55.927140 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927145 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927149 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927154 | orchestrator | 2025-03-27 01:01:55.927159 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-27 01:01:55.927164 | orchestrator | Thursday 27 March 2025 00:55:56 +0000 (0:00:00.402) 0:08:04.539 ******** 2025-03-27 01:01:55.927169 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927173 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927178 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927183 | orchestrator | 2025-03-27 01:01:55.927188 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-27 01:01:55.927193 | orchestrator | Thursday 27 March 2025 00:55:56 +0000 (0:00:00.765) 0:08:05.305 ******** 2025-03-27 01:01:55.927198 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927202 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927207 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927212 | orchestrator | 2025-03-27 01:01:55.927217 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-27 01:01:55.927222 | orchestrator | Thursday 27 March 2025 00:55:57 +0000 (0:00:00.484) 0:08:05.789 ******** 2025-03-27 01:01:55.927226 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927231 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927236 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927241 | orchestrator | 2025-03-27 01:01:55.927246 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-27 01:01:55.927251 | orchestrator | Thursday 27 March 2025 00:55:57 +0000 (0:00:00.386) 0:08:06.175 ******** 2025-03-27 01:01:55.927255 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.927260 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.927265 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927270 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.927277 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.927282 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927287 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.927292 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.927297 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927304 | orchestrator | 2025-03-27 01:01:55.927309 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-27 01:01:55.927314 | orchestrator | Thursday 27 March 2025 00:55:58 +0000 (0:00:00.452) 0:08:06.628 ******** 2025-03-27 01:01:55.927318 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-03-27 01:01:55.927326 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-03-27 01:01:55.927331 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927336 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-03-27 01:01:55.927340 | 
orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-03-27 01:01:55.927345 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927350 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-03-27 01:01:55.927366 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-03-27 01:01:55.927372 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927377 | orchestrator | 2025-03-27 01:01:55.927382 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-27 01:01:55.927386 | orchestrator | Thursday 27 March 2025 00:55:58 +0000 (0:00:00.717) 0:08:07.345 ******** 2025-03-27 01:01:55.927391 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927396 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927401 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927406 | orchestrator | 2025-03-27 01:01:55.927411 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-27 01:01:55.927415 | orchestrator | Thursday 27 March 2025 00:55:59 +0000 (0:00:00.365) 0:08:07.710 ******** 2025-03-27 01:01:55.927420 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927425 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927430 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927434 | orchestrator | 2025-03-27 01:01:55.927452 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-27 01:01:55.927457 | orchestrator | Thursday 27 March 2025 00:55:59 +0000 (0:00:00.382) 0:08:08.092 ******** 2025-03-27 01:01:55.927462 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927467 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927471 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927476 | orchestrator | 2025-03-27 01:01:55.927481 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-27 01:01:55.927486 | orchestrator | Thursday 27 March 2025 00:56:00 +0000 (0:00:00.383) 0:08:08.476 ******** 2025-03-27 01:01:55.927491 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927496 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927501 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927505 | orchestrator | 2025-03-27 01:01:55.927510 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-27 01:01:55.927515 | orchestrator | Thursday 27 March 2025 00:56:00 +0000 (0:00:00.677) 0:08:09.154 ******** 2025-03-27 01:01:55.927520 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927525 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927529 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927534 | orchestrator | 2025-03-27 01:01:55.927539 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-27 01:01:55.927546 | orchestrator | Thursday 27 March 2025 00:56:01 +0000 (0:00:00.385) 0:08:09.539 ******** 2025-03-27 01:01:55.927551 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927556 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927566 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927570 | orchestrator | 2025-03-27 01:01:55.927575 | orchestrator | 
TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-27 01:01:55.927580 | orchestrator | Thursday 27 March 2025 00:56:01 +0000 (0:00:00.381) 0:08:09.921 ******** 2025-03-27 01:01:55.927585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.927590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.927595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.927599 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927604 | orchestrator | 2025-03-27 01:01:55.927609 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-27 01:01:55.927614 | orchestrator | Thursday 27 March 2025 00:56:01 +0000 (0:00:00.473) 0:08:10.395 ******** 2025-03-27 01:01:55.927619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.927624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.927629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.927633 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927638 | orchestrator | 2025-03-27 01:01:55.927644 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-27 01:01:55.927648 | orchestrator | Thursday 27 March 2025 00:56:02 +0000 (0:00:00.535) 0:08:10.930 ******** 2025-03-27 01:01:55.927653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.927658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.927663 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.927668 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927673 | orchestrator | 2025-03-27 01:01:55.927677 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.927682 | orchestrator | Thursday 27 March 2025 00:56:03 +0000 (0:00:00.797) 0:08:11.727 ******** 2025-03-27 01:01:55.927687 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927692 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927697 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927702 | orchestrator | 2025-03-27 01:01:55.927707 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-27 01:01:55.927712 | orchestrator | Thursday 27 March 2025 00:56:03 +0000 (0:00:00.644) 0:08:12.372 ******** 2025-03-27 01:01:55.927716 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.927721 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927726 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.927731 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927736 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.927740 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927745 | orchestrator | 2025-03-27 01:01:55.927750 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-27 01:01:55.927755 | orchestrator | Thursday 27 March 2025 00:56:04 +0000 (0:00:00.498) 0:08:12.871 ******** 2025-03-27 01:01:55.927760 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927765 | orchestrator | skipping: [testbed-node-4] 2025-03-27 
01:01:55.927770 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927774 | orchestrator | 2025-03-27 01:01:55.927779 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.927796 | orchestrator | Thursday 27 March 2025 00:56:04 +0000 (0:00:00.368) 0:08:13.239 ******** 2025-03-27 01:01:55.927802 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927807 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927811 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927816 | orchestrator | 2025-03-27 01:01:55.927821 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-27 01:01:55.927826 | orchestrator | Thursday 27 March 2025 00:56:05 +0000 (0:00:00.416) 0:08:13.656 ******** 2025-03-27 01:01:55.927834 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.927839 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927843 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.927848 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927853 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.927858 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927863 | orchestrator | 2025-03-27 01:01:55.927868 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-27 01:01:55.927873 | orchestrator | Thursday 27 March 2025 00:56:06 +0000 (0:00:00.889) 0:08:14.546 ******** 2025-03-27 01:01:55.927877 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.927882 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927887 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.927892 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927897 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.927902 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927907 | orchestrator | 2025-03-27 01:01:55.927911 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-27 01:01:55.927916 | orchestrator | Thursday 27 March 2025 00:56:06 +0000 (0:00:00.379) 0:08:14.925 ******** 2025-03-27 01:01:55.927921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.927926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.927931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.927936 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-27 01:01:55.927941 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-27 01:01:55.927945 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927950 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-27 01:01:55.927955 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.927960 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-27 01:01:55.927965 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-27 
01:01:55.927969 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-27 01:01:55.927974 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.927979 | orchestrator | 2025-03-27 01:01:55.927984 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-27 01:01:55.927989 | orchestrator | Thursday 27 March 2025 00:56:07 +0000 (0:00:00.663) 0:08:15.589 ******** 2025-03-27 01:01:55.927993 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.927998 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928003 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928008 | orchestrator | 2025-03-27 01:01:55.928013 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-27 01:01:55.928018 | orchestrator | Thursday 27 March 2025 00:56:08 +0000 (0:00:00.964) 0:08:16.553 ******** 2025-03-27 01:01:55.928022 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-27 01:01:55.928031 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.928035 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-27 01:01:55.928040 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928045 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-27 01:01:55.928050 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928055 | orchestrator | 2025-03-27 01:01:55.928060 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-27 01:01:55.928065 | orchestrator | Thursday 27 March 2025 00:56:08 +0000 (0:00:00.573) 0:08:17.126 ******** 2025-03-27 01:01:55.928072 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.928080 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928085 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928090 | orchestrator | 2025-03-27 01:01:55.928095 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-27 01:01:55.928100 | orchestrator | Thursday 27 March 2025 00:56:09 +0000 (0:00:00.977) 0:08:18.104 ******** 2025-03-27 01:01:55.928104 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.928109 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928114 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928119 | orchestrator | 2025-03-27 01:01:55.928123 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-03-27 01:01:55.928128 | orchestrator | Thursday 27 March 2025 00:56:10 +0000 (0:00:00.749) 0:08:18.853 ******** 2025-03-27 01:01:55.928133 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.928138 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.928143 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.928147 | orchestrator | 2025-03-27 01:01:55.928155 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-03-27 01:01:55.928160 | orchestrator | Thursday 27 March 2025 00:56:11 +0000 (0:00:00.714) 0:08:19.568 ******** 2025-03-27 01:01:55.928164 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-03-27 01:01:55.928169 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:01:55.928174 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 
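The 'set_fact container_exec_cmd' task above prepares the command prefix that later tasks use to run ceph CLI calls inside the monitor container on the first mon host (testbed-node-0 here). A minimal Python sketch of that pattern, assuming a Docker-based deployment and an illustrative 'ceph-mon-<hostname>' container name, neither of which is confirmed by the log:

    import subprocess

    # First monitor host, taken from the delegation targets shown in the log.
    FIRST_MON = "testbed-node-0"
    # Hypothetical container name; the real naming scheme is not visible here.
    MON_CONTAINER = f"ceph-mon-{FIRST_MON}"

    # The exec prefix playing the role of container_exec_cmd.
    container_exec_cmd = ["docker", "exec", MON_CONTAINER]

    def ceph(*args: str) -> subprocess.CompletedProcess:
        """Run a ceph CLI command inside the monitor container."""
        return subprocess.run(
            [*container_exec_cmd, "ceph", "--cluster", "ceph", *args],
            capture_output=True, text=True, check=True,
        )

    # Example: the kind of call the later 'set noup flag' task delegates to the first mon.
    # ceph("osd", "set", "noup")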
2025-03-27 01:01:55.928179 | orchestrator | 2025-03-27 01:01:55.928195 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-03-27 01:01:55.928200 | orchestrator | Thursday 27 March 2025 00:56:11 +0000 (0:00:00.757) 0:08:20.325 ******** 2025-03-27 01:01:55.928205 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.928210 | orchestrator | 2025-03-27 01:01:55.928215 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-03-27 01:01:55.928220 | orchestrator | Thursday 27 March 2025 00:56:12 +0000 (0:00:00.627) 0:08:20.952 ******** 2025-03-27 01:01:55.928225 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.928229 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928234 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928239 | orchestrator | 2025-03-27 01:01:55.928244 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-03-27 01:01:55.928249 | orchestrator | Thursday 27 March 2025 00:56:13 +0000 (0:00:00.652) 0:08:21.604 ******** 2025-03-27 01:01:55.928254 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.928258 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928263 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928268 | orchestrator | 2025-03-27 01:01:55.928273 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-03-27 01:01:55.928278 | orchestrator | Thursday 27 March 2025 00:56:13 +0000 (0:00:00.358) 0:08:21.962 ******** 2025-03-27 01:01:55.928283 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.928287 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928292 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928297 | orchestrator | 2025-03-27 01:01:55.928302 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-03-27 01:01:55.928307 | orchestrator | Thursday 27 March 2025 00:56:13 +0000 (0:00:00.362) 0:08:22.325 ******** 2025-03-27 01:01:55.928311 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.928316 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928321 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928326 | orchestrator | 2025-03-27 01:01:55.928331 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-03-27 01:01:55.928336 | orchestrator | Thursday 27 March 2025 00:56:14 +0000 (0:00:00.348) 0:08:22.673 ******** 2025-03-27 01:01:55.928344 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.928349 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.928354 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.928358 | orchestrator | 2025-03-27 01:01:55.928363 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-03-27 01:01:55.928368 | orchestrator | Thursday 27 March 2025 00:56:15 +0000 (0:00:00.917) 0:08:23.591 ******** 2025-03-27 01:01:55.928373 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.928378 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.928383 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.928387 | orchestrator | 2025-03-27 01:01:55.928392 | orchestrator | TASK [ceph-osd : apply operating system tuning] 
******************************** 2025-03-27 01:01:55.928397 | orchestrator | Thursday 27 March 2025 00:56:15 +0000 (0:00:00.480) 0:08:24.072 ******** 2025-03-27 01:01:55.928402 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-03-27 01:01:55.928410 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-03-27 01:01:55.928414 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-03-27 01:01:55.928419 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-03-27 01:01:55.928424 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-03-27 01:01:55.928429 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-03-27 01:01:55.928434 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-03-27 01:01:55.928439 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-03-27 01:01:55.928472 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-03-27 01:01:55.928477 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-03-27 01:01:55.928482 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-03-27 01:01:55.928487 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-03-27 01:01:55.928491 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-03-27 01:01:55.928496 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-03-27 01:01:55.928501 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-03-27 01:01:55.928506 | orchestrator | 2025-03-27 01:01:55.928511 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-03-27 01:01:55.928515 | orchestrator | Thursday 27 March 2025 00:56:20 +0000 (0:00:04.371) 0:08:28.443 ******** 2025-03-27 01:01:55.928520 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.928525 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928530 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928535 | orchestrator | 2025-03-27 01:01:55.928542 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-03-27 01:01:55.928547 | orchestrator | Thursday 27 March 2025 00:56:20 +0000 (0:00:00.607) 0:08:29.050 ******** 2025-03-27 01:01:55.928552 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.928556 | orchestrator | 2025-03-27 01:01:55.928561 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-03-27 01:01:55.928579 | orchestrator | Thursday 27 March 2025 00:56:21 +0000 (0:00:00.546) 0:08:29.597 ******** 2025-03-27 01:01:55.928585 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-03-27 01:01:55.928590 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-03-27 01:01:55.928598 | orchestrator | ok: [testbed-node-5] 
=> (item=/var/lib/ceph/bootstrap-osd/) 2025-03-27 01:01:55.928603 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-03-27 01:01:55.928608 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-03-27 01:01:55.928612 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-03-27 01:01:55.928617 | orchestrator | 2025-03-27 01:01:55.928622 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-03-27 01:01:55.928627 | orchestrator | Thursday 27 March 2025 00:56:22 +0000 (0:00:01.102) 0:08:30.699 ******** 2025-03-27 01:01:55.928632 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:01:55.928636 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-27 01:01:55.928641 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-03-27 01:01:55.928646 | orchestrator | 2025-03-27 01:01:55.928651 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-03-27 01:01:55.928656 | orchestrator | Thursday 27 March 2025 00:56:24 +0000 (0:00:02.211) 0:08:32.911 ******** 2025-03-27 01:01:55.928660 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-27 01:01:55.928665 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-27 01:01:55.928670 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.928677 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-27 01:01:55.928682 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-27 01:01:55.928687 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.928692 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-27 01:01:55.928696 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-27 01:01:55.928701 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.928706 | orchestrator | 2025-03-27 01:01:55.928711 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-03-27 01:01:55.928715 | orchestrator | Thursday 27 March 2025 00:56:25 +0000 (0:00:01.346) 0:08:34.258 ******** 2025-03-27 01:01:55.928720 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-03-27 01:01:55.928725 | orchestrator | 2025-03-27 01:01:55.928730 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-03-27 01:01:55.928735 | orchestrator | Thursday 27 March 2025 00:56:28 +0000 (0:00:02.648) 0:08:36.906 ******** 2025-03-27 01:01:55.928739 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.928744 | orchestrator | 2025-03-27 01:01:55.928749 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-03-27 01:01:55.928754 | orchestrator | Thursday 27 March 2025 00:56:29 +0000 (0:00:00.872) 0:08:37.779 ******** 2025-03-27 01:01:55.928759 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.928764 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928769 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928773 | orchestrator | 2025-03-27 01:01:55.928778 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-03-27 01:01:55.928783 | orchestrator | Thursday 27 March 2025 
00:56:29 +0000 (0:00:00.371) 0:08:38.150 ******** 2025-03-27 01:01:55.928788 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.928793 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928797 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928802 | orchestrator | 2025-03-27 01:01:55.928807 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-03-27 01:01:55.928812 | orchestrator | Thursday 27 March 2025 00:56:30 +0000 (0:00:00.381) 0:08:38.531 ******** 2025-03-27 01:01:55.928817 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.928821 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928826 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928833 | orchestrator | 2025-03-27 01:01:55.928838 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-03-27 01:01:55.928846 | orchestrator | Thursday 27 March 2025 00:56:30 +0000 (0:00:00.357) 0:08:38.889 ******** 2025-03-27 01:01:55.928851 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.928856 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.928861 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.928865 | orchestrator | 2025-03-27 01:01:55.928870 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-03-27 01:01:55.928875 | orchestrator | Thursday 27 March 2025 00:56:31 +0000 (0:00:00.716) 0:08:39.606 ******** 2025-03-27 01:01:55.928880 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.928885 | orchestrator | 2025-03-27 01:01:55.928889 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-03-27 01:01:55.928894 | orchestrator | Thursday 27 March 2025 00:56:31 +0000 (0:00:00.638) 0:08:40.245 ******** 2025-03-27 01:01:55.928899 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730', 'data_vg': 'ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730'}) 2025-03-27 01:01:55.928905 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-923c5540-3b69-54d6-b090-bccde0d698f1', 'data_vg': 'ceph-923c5540-3b69-54d6-b090-bccde0d698f1'}) 2025-03-27 01:01:55.928909 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bac76156-9f65-5e37-8447-16c40269f5cf', 'data_vg': 'ceph-bac76156-9f65-5e37-8447-16c40269f5cf'}) 2025-03-27 01:01:55.928925 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d321ea45-1a00-5698-8092-45c793cb3b8c', 'data_vg': 'ceph-d321ea45-1a00-5698-8092-45c793cb3b8c'}) 2025-03-27 01:01:55.928931 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8acd0346-cc61-560a-be8a-825f05553edd', 'data_vg': 'ceph-8acd0346-cc61-560a-be8a-825f05553edd'}) 2025-03-27 01:01:55.928936 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b', 'data_vg': 'ceph-cb3edc0f-ef8f-5bb1-94d3-58e33ab1473b'}) 2025-03-27 01:01:55.928941 | orchestrator | 2025-03-27 01:01:55.928946 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-03-27 01:01:55.928950 | orchestrator | Thursday 27 March 2025 00:57:12 +0000 (0:00:40.764) 0:09:21.009 ******** 2025-03-27 01:01:55.928955 | orchestrator | skipping: [testbed-node-3] 
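The 'use ceph-volume to create bluestore osds' task above runs one ceph-volume call per data/data_vg item and accounts for most of this play's runtime (about 41 seconds). A hedged sketch of roughly what each item amounts to, using one LV/VG pair from the log, the generic ceph-volume CLI flags, and the osd_dmcrypt=1 environment selected just before; this is an illustration, not the role's literal command template:

    # One of the LV/VG pairs reported as "changed" for testbed-node-3 above.
    item = {
        "data": "osd-block-5e2bf155-ac50-562d-a3fc-a4d9038fe730",
        "data_vg": "ceph-5e2bf155-ac50-562d-a3fc-a4d9038fe730",
    }

    # Approximate per-item invocation: create a bluestore OSD on an existing
    # logical volume, with dmcrypt enabled (matching '-e osd_dmcrypt=1' above).
    cmd = [
        "ceph-volume", "--cluster", "ceph", "lvm", "create",
        "--bluestore", "--dmcrypt",
        "--data", f"{item['data_vg']}/{item['data']}",
    ]
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True) would execute it on a real OSD node.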
2025-03-27 01:01:55.928960 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.928965 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.928970 | orchestrator | 2025-03-27 01:01:55.928974 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] ********************************* 2025-03-27 01:01:55.928979 | orchestrator | Thursday 27 March 2025 00:57:13 +0000 (0:00:00.513) 0:09:21.523 ******** 2025-03-27 01:01:55.928984 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.928989 | orchestrator | 2025-03-27 01:01:55.928994 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-03-27 01:01:55.928998 | orchestrator | Thursday 27 March 2025 00:57:13 +0000 (0:00:00.659) 0:09:22.182 ******** 2025-03-27 01:01:55.929003 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.929008 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.929013 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.929020 | orchestrator | 2025-03-27 01:01:55.929025 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-03-27 01:01:55.929030 | orchestrator | Thursday 27 March 2025 00:57:14 +0000 (0:00:00.736) 0:09:22.919 ******** 2025-03-27 01:01:55.929034 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.929039 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.929044 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.929049 | orchestrator | 2025-03-27 01:01:55.929053 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-03-27 01:01:55.929058 | orchestrator | Thursday 27 March 2025 00:57:16 +0000 (0:00:02.210) 0:09:25.130 ******** 2025-03-27 01:01:55.929066 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.929071 | orchestrator | 2025-03-27 01:01:55.929076 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-03-27 01:01:55.929084 | orchestrator | Thursday 27 March 2025 00:57:17 +0000 (0:00:00.594) 0:09:25.724 ******** 2025-03-27 01:01:55.929089 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.929094 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.929099 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.929103 | orchestrator | 2025-03-27 01:01:55.929108 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-03-27 01:01:55.929113 | orchestrator | Thursday 27 March 2025 00:57:19 +0000 (0:00:01.755) 0:09:27.479 ******** 2025-03-27 01:01:55.929118 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.929123 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.929127 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.929132 | orchestrator | 2025-03-27 01:01:55.929137 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-03-27 01:01:55.929142 | orchestrator | Thursday 27 March 2025 00:57:20 +0000 (0:00:01.238) 0:09:28.718 ******** 2025-03-27 01:01:55.929146 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.929151 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.929156 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.929161 | orchestrator | 2025-03-27 01:01:55.929166 | 
orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-03-27 01:01:55.929170 | orchestrator | Thursday 27 March 2025 00:57:22 +0000 (0:00:01.858) 0:09:30.576 ******** 2025-03-27 01:01:55.929175 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929180 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.929185 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.929189 | orchestrator | 2025-03-27 01:01:55.929194 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-03-27 01:01:55.929199 | orchestrator | Thursday 27 March 2025 00:57:22 +0000 (0:00:00.379) 0:09:30.956 ******** 2025-03-27 01:01:55.929204 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929208 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.929213 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.929218 | orchestrator | 2025-03-27 01:01:55.929223 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-03-27 01:01:55.929227 | orchestrator | Thursday 27 March 2025 00:57:23 +0000 (0:00:00.634) 0:09:31.591 ******** 2025-03-27 01:01:55.929232 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-03-27 01:01:55.929237 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-03-27 01:01:55.929242 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-03-27 01:01:55.929246 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-03-27 01:01:55.929251 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-03-27 01:01:55.929256 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-03-27 01:01:55.929261 | orchestrator | 2025-03-27 01:01:55.929265 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-03-27 01:01:55.929270 | orchestrator | Thursday 27 March 2025 00:57:24 +0000 (0:00:01.136) 0:09:32.728 ******** 2025-03-27 01:01:55.929275 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-03-27 01:01:55.929280 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-03-27 01:01:55.929285 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-03-27 01:01:55.929289 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-03-27 01:01:55.929294 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-03-27 01:01:55.929310 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-03-27 01:01:55.929315 | orchestrator | 2025-03-27 01:01:55.929320 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-03-27 01:01:55.929325 | orchestrator | Thursday 27 March 2025 00:57:27 +0000 (0:00:03.624) 0:09:36.352 ******** 2025-03-27 01:01:55.929333 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929338 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.929343 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-03-27 01:01:55.929347 | orchestrator | 2025-03-27 01:01:55.929352 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-03-27 01:01:55.929357 | orchestrator | Thursday 27 March 2025 00:57:30 +0000 (0:00:02.587) 0:09:38.939 ******** 2025-03-27 01:01:55.929362 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929366 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.929371 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries 
left). 2025-03-27 01:01:55.929376 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-03-27 01:01:55.929381 | orchestrator | 2025-03-27 01:01:55.929386 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-03-27 01:01:55.929390 | orchestrator | Thursday 27 March 2025 00:57:43 +0000 (0:00:12.790) 0:09:51.730 ******** 2025-03-27 01:01:55.929395 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929400 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.929405 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.929410 | orchestrator | 2025-03-27 01:01:55.929414 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-03-27 01:01:55.929419 | orchestrator | Thursday 27 March 2025 00:57:43 +0000 (0:00:00.507) 0:09:52.237 ******** 2025-03-27 01:01:55.929424 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929429 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.929434 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.929438 | orchestrator | 2025-03-27 01:01:55.929454 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-27 01:01:55.929459 | orchestrator | Thursday 27 March 2025 00:57:45 +0000 (0:00:01.229) 0:09:53.466 ******** 2025-03-27 01:01:55.929464 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.929469 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.929474 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.929478 | orchestrator | 2025-03-27 01:01:55.929483 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-03-27 01:01:55.929488 | orchestrator | Thursday 27 March 2025 00:57:45 +0000 (0:00:00.726) 0:09:54.193 ******** 2025-03-27 01:01:55.929493 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.929498 | orchestrator | 2025-03-27 01:01:55.929502 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-03-27 01:01:55.929507 | orchestrator | Thursday 27 March 2025 00:57:46 +0000 (0:00:00.870) 0:09:55.064 ******** 2025-03-27 01:01:55.929512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.929517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.929522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.929527 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929531 | orchestrator | 2025-03-27 01:01:55.929536 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-03-27 01:01:55.929541 | orchestrator | Thursday 27 March 2025 00:57:47 +0000 (0:00:00.456) 0:09:55.520 ******** 2025-03-27 01:01:55.929546 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929550 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.929555 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.929560 | orchestrator | 2025-03-27 01:01:55.929565 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-03-27 01:01:55.929569 | orchestrator | Thursday 27 March 2025 00:57:47 +0000 (0:00:00.396) 0:09:55.916 ******** 2025-03-27 01:01:55.929574 | orchestrator | skipping: [testbed-node-3] 
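The 'wait for all osd to be up' task above needed one retry (out of 60 allowed) before the cluster reported every OSD up, roughly 13 seconds later. A minimal sketch of that kind of readiness poll, assuming Docker access to the first monitor and the JSON emitted by 'ceph osd stat -f json' (field names can differ slightly between Ceph releases):

    import json
    import subprocess
    import time

    def all_osds_up(retries: int = 60, delay: int = 10) -> bool:
        """Poll until every registered OSD is up, mirroring the retry loop in the log."""
        for _ in range(retries):
            out = subprocess.run(
                ["docker", "exec", "ceph-mon-testbed-node-0",  # assumed container name
                 "ceph", "--cluster", "ceph", "osd", "stat", "-f", "json"],
                capture_output=True, text=True, check=True,
            ).stdout
            stat = json.loads(out)
            # The check only cares that num_up_osds has caught up with num_osds.
            if stat.get("num_osds", 0) > 0 and stat["num_osds"] == stat.get("num_up_osds"):
                return True
            time.sleep(delay)
        return False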
2025-03-27 01:01:55.929579 | orchestrator | 2025-03-27 01:01:55.929584 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-03-27 01:01:55.929594 | orchestrator | Thursday 27 March 2025 00:57:47 +0000 (0:00:00.344) 0:09:56.261 ******** 2025-03-27 01:01:55.929599 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929604 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.929608 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.929615 | orchestrator | 2025-03-27 01:01:55.929620 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-03-27 01:01:55.929625 | orchestrator | Thursday 27 March 2025 00:57:48 +0000 (0:00:00.692) 0:09:56.954 ******** 2025-03-27 01:01:55.929630 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929635 | orchestrator | 2025-03-27 01:01:55.929640 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-03-27 01:01:55.929644 | orchestrator | Thursday 27 March 2025 00:57:48 +0000 (0:00:00.282) 0:09:57.236 ******** 2025-03-27 01:01:55.929649 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929654 | orchestrator | 2025-03-27 01:01:55.929659 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-03-27 01:01:55.929664 | orchestrator | Thursday 27 March 2025 00:57:49 +0000 (0:00:00.255) 0:09:57.492 ******** 2025-03-27 01:01:55.929668 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929673 | orchestrator | 2025-03-27 01:01:55.929678 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-03-27 01:01:55.929683 | orchestrator | Thursday 27 March 2025 00:57:49 +0000 (0:00:00.197) 0:09:57.689 ******** 2025-03-27 01:01:55.929687 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929692 | orchestrator | 2025-03-27 01:01:55.929697 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-03-27 01:01:55.929702 | orchestrator | Thursday 27 March 2025 00:57:49 +0000 (0:00:00.354) 0:09:58.043 ******** 2025-03-27 01:01:55.929706 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929711 | orchestrator | 2025-03-27 01:01:55.929716 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-03-27 01:01:55.929732 | orchestrator | Thursday 27 March 2025 00:57:49 +0000 (0:00:00.257) 0:09:58.301 ******** 2025-03-27 01:01:55.929738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.929743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.929747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.929752 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929760 | orchestrator | 2025-03-27 01:01:55.929765 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-03-27 01:01:55.929770 | orchestrator | Thursday 27 March 2025 00:57:50 +0000 (0:00:00.450) 0:09:58.751 ******** 2025-03-27 01:01:55.929774 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929779 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.929784 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.929789 | orchestrator | 2025-03-27 01:01:55.929794 | orchestrator | RUNNING HANDLER [ceph-handler : 
re-enable pg autoscale on pools] *************** 2025-03-27 01:01:55.929799 | orchestrator | Thursday 27 March 2025 00:57:50 +0000 (0:00:00.348) 0:09:59.100 ******** 2025-03-27 01:01:55.929803 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929808 | orchestrator | 2025-03-27 01:01:55.929813 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-03-27 01:01:55.929818 | orchestrator | Thursday 27 March 2025 00:57:51 +0000 (0:00:00.919) 0:10:00.020 ******** 2025-03-27 01:01:55.929822 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929827 | orchestrator | 2025-03-27 01:01:55.929832 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-27 01:01:55.929837 | orchestrator | Thursday 27 March 2025 00:57:51 +0000 (0:00:00.256) 0:10:00.276 ******** 2025-03-27 01:01:55.929841 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.929846 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.929851 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.929856 | orchestrator | 2025-03-27 01:01:55.929864 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-03-27 01:01:55.929869 | orchestrator | 2025-03-27 01:01:55.929874 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-27 01:01:55.929879 | orchestrator | Thursday 27 March 2025 00:57:54 +0000 (0:00:03.068) 0:10:03.345 ******** 2025-03-27 01:01:55.929883 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.929889 | orchestrator | 2025-03-27 01:01:55.929893 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-03-27 01:01:55.929898 | orchestrator | Thursday 27 March 2025 00:57:56 +0000 (0:00:01.362) 0:10:04.707 ******** 2025-03-27 01:01:55.929903 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.929908 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.929913 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.929917 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.929922 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.929927 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.929932 | orchestrator | 2025-03-27 01:01:55.929937 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-27 01:01:55.929941 | orchestrator | Thursday 27 March 2025 00:57:57 +0000 (0:00:00.792) 0:10:05.500 ******** 2025-03-27 01:01:55.929946 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.929951 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.929956 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.929960 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.929965 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.929970 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.929975 | orchestrator | 2025-03-27 01:01:55.929979 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-27 01:01:55.929984 | orchestrator | Thursday 27 March 2025 00:57:58 +0000 (0:00:01.410) 0:10:06.911 ******** 2025-03-27 01:01:55.929989 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.929994 | orchestrator | 
skipping: [testbed-node-1] 2025-03-27 01:01:55.929999 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930004 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.930008 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.930027 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.930032 | orchestrator | 2025-03-27 01:01:55.930037 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-03-27 01:01:55.930044 | orchestrator | Thursday 27 March 2025 00:57:59 +0000 (0:00:01.336) 0:10:08.247 ******** 2025-03-27 01:01:55.930049 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930054 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930059 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930064 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.930069 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.930073 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.930078 | orchestrator | 2025-03-27 01:01:55.930083 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-27 01:01:55.930088 | orchestrator | Thursday 27 March 2025 00:58:01 +0000 (0:00:01.263) 0:10:09.511 ******** 2025-03-27 01:01:55.930092 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930097 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.930102 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930107 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930112 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.930116 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.930121 | orchestrator | 2025-03-27 01:01:55.930126 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-27 01:01:55.930131 | orchestrator | Thursday 27 March 2025 00:58:02 +0000 (0:00:01.064) 0:10:10.576 ******** 2025-03-27 01:01:55.930135 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930140 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930148 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930153 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930157 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930162 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930167 | orchestrator | 2025-03-27 01:01:55.930172 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-27 01:01:55.930189 | orchestrator | Thursday 27 March 2025 00:58:02 +0000 (0:00:00.737) 0:10:11.313 ******** 2025-03-27 01:01:55.930195 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930199 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930204 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930209 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930214 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930219 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930223 | orchestrator | 2025-03-27 01:01:55.930228 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-27 01:01:55.930233 | orchestrator | Thursday 27 March 2025 00:58:03 +0000 (0:00:00.934) 0:10:12.248 ******** 2025-03-27 01:01:55.930238 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930245 | orchestrator | skipping: [testbed-node-1] 2025-03-27 
01:01:55.930250 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930255 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930260 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930265 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930270 | orchestrator | 2025-03-27 01:01:55.930274 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-27 01:01:55.930279 | orchestrator | Thursday 27 March 2025 00:58:04 +0000 (0:00:00.692) 0:10:12.940 ******** 2025-03-27 01:01:55.930284 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930289 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930294 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930298 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930303 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930308 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930313 | orchestrator | 2025-03-27 01:01:55.930318 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-27 01:01:55.930322 | orchestrator | Thursday 27 March 2025 00:58:05 +0000 (0:00:00.940) 0:10:13.881 ******** 2025-03-27 01:01:55.930327 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930332 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930337 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930342 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930346 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930351 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930356 | orchestrator | 2025-03-27 01:01:55.930361 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-27 01:01:55.930366 | orchestrator | Thursday 27 March 2025 00:58:06 +0000 (0:00:00.787) 0:10:14.668 ******** 2025-03-27 01:01:55.930370 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.930375 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.930380 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.930385 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.930389 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.930394 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.930399 | orchestrator | 2025-03-27 01:01:55.930404 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-27 01:01:55.930408 | orchestrator | Thursday 27 March 2025 00:58:07 +0000 (0:00:01.204) 0:10:15.873 ******** 2025-03-27 01:01:55.930413 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930418 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930423 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930428 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930432 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930452 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930457 | orchestrator | 2025-03-27 01:01:55.930462 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-27 01:01:55.930467 | orchestrator | Thursday 27 March 2025 00:58:08 +0000 (0:00:00.709) 0:10:16.583 ******** 2025-03-27 01:01:55.930472 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.930477 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.930481 | orchestrator | ok: 
[testbed-node-2] 2025-03-27 01:01:55.930486 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930491 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930496 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930501 | orchestrator | 2025-03-27 01:01:55.930506 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-27 01:01:55.930511 | orchestrator | Thursday 27 March 2025 00:58:09 +0000 (0:00:00.900) 0:10:17.484 ******** 2025-03-27 01:01:55.930515 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930520 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930525 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930530 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.930535 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.930539 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.930544 | orchestrator | 2025-03-27 01:01:55.930549 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-27 01:01:55.930554 | orchestrator | Thursday 27 March 2025 00:58:09 +0000 (0:00:00.727) 0:10:18.211 ******** 2025-03-27 01:01:55.930559 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930564 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930568 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930573 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.930578 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.930583 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.930587 | orchestrator | 2025-03-27 01:01:55.930592 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-27 01:01:55.930597 | orchestrator | Thursday 27 March 2025 00:58:10 +0000 (0:00:00.985) 0:10:19.197 ******** 2025-03-27 01:01:55.930602 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930607 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930612 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930616 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.930621 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.930629 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.930634 | orchestrator | 2025-03-27 01:01:55.930639 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-27 01:01:55.930644 | orchestrator | Thursday 27 March 2025 00:58:11 +0000 (0:00:00.687) 0:10:19.885 ******** 2025-03-27 01:01:55.930649 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930654 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930659 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930664 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930669 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930673 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930678 | orchestrator | 2025-03-27 01:01:55.930695 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-27 01:01:55.930700 | orchestrator | Thursday 27 March 2025 00:58:12 +0000 (0:00:00.947) 0:10:20.832 ******** 2025-03-27 01:01:55.930705 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930710 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930715 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930720 | 
orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930725 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930730 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930734 | orchestrator | 2025-03-27 01:01:55.930739 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-27 01:01:55.930744 | orchestrator | Thursday 27 March 2025 00:58:13 +0000 (0:00:00.742) 0:10:21.575 ******** 2025-03-27 01:01:55.930752 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.930757 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.930762 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.930767 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930771 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930776 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930781 | orchestrator | 2025-03-27 01:01:55.930786 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-27 01:01:55.930793 | orchestrator | Thursday 27 March 2025 00:58:14 +0000 (0:00:00.996) 0:10:22.571 ******** 2025-03-27 01:01:55.930798 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.930803 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.930808 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.930812 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.930817 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.930822 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.930827 | orchestrator | 2025-03-27 01:01:55.930831 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-27 01:01:55.930836 | orchestrator | Thursday 27 March 2025 00:58:14 +0000 (0:00:00.728) 0:10:23.300 ******** 2025-03-27 01:01:55.930841 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930846 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930850 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930855 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930860 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930865 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930870 | orchestrator | 2025-03-27 01:01:55.930874 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-27 01:01:55.930879 | orchestrator | Thursday 27 March 2025 00:58:15 +0000 (0:00:00.949) 0:10:24.249 ******** 2025-03-27 01:01:55.930884 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930889 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930894 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930898 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930903 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930908 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930913 | orchestrator | 2025-03-27 01:01:55.930918 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-27 01:01:55.930923 | orchestrator | Thursday 27 March 2025 00:58:16 +0000 (0:00:00.764) 0:10:25.014 ******** 2025-03-27 01:01:55.930927 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930932 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930937 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930942 | orchestrator | skipping: [testbed-node-3] 
2025-03-27 01:01:55.930946 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930951 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930956 | orchestrator | 2025-03-27 01:01:55.930961 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-27 01:01:55.930966 | orchestrator | Thursday 27 March 2025 00:58:17 +0000 (0:00:01.003) 0:10:26.018 ******** 2025-03-27 01:01:55.930970 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.930975 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.930980 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.930985 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.930990 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.930994 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.930999 | orchestrator | 2025-03-27 01:01:55.931004 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-27 01:01:55.931009 | orchestrator | Thursday 27 March 2025 00:58:18 +0000 (0:00:00.678) 0:10:26.696 ******** 2025-03-27 01:01:55.931013 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931021 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931025 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931034 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931039 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931043 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931048 | orchestrator | 2025-03-27 01:01:55.931053 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-27 01:01:55.931058 | orchestrator | Thursday 27 March 2025 00:58:19 +0000 (0:00:00.961) 0:10:27.658 ******** 2025-03-27 01:01:55.931063 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931067 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931072 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931077 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931082 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931087 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931092 | orchestrator | 2025-03-27 01:01:55.931096 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-27 01:01:55.931101 | orchestrator | Thursday 27 March 2025 00:58:19 +0000 (0:00:00.704) 0:10:28.363 ******** 2025-03-27 01:01:55.931106 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931110 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931115 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931120 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931125 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931130 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931135 | orchestrator | 2025-03-27 01:01:55.931139 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-27 01:01:55.931144 | orchestrator | Thursday 27 March 2025 00:58:20 +0000 (0:00:00.947) 0:10:29.310 ******** 2025-03-27 01:01:55.931149 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931154 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931159 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931175 | orchestrator | 
skipping: [testbed-node-3] 2025-03-27 01:01:55.931180 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931185 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931190 | orchestrator | 2025-03-27 01:01:55.931195 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-27 01:01:55.931200 | orchestrator | Thursday 27 March 2025 00:58:21 +0000 (0:00:00.770) 0:10:30.081 ******** 2025-03-27 01:01:55.931205 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931210 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931214 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931219 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931224 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931229 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931234 | orchestrator | 2025-03-27 01:01:55.931239 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-27 01:01:55.931243 | orchestrator | Thursday 27 March 2025 00:58:22 +0000 (0:00:01.042) 0:10:31.123 ******** 2025-03-27 01:01:55.931248 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931253 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931258 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931263 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931268 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931272 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931277 | orchestrator | 2025-03-27 01:01:55.931282 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-27 01:01:55.931287 | orchestrator | Thursday 27 March 2025 00:58:23 +0000 (0:00:00.704) 0:10:31.827 ******** 2025-03-27 01:01:55.931295 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931300 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931305 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931310 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931318 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931323 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931328 | orchestrator | 2025-03-27 01:01:55.931333 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-27 01:01:55.931338 | orchestrator | Thursday 27 March 2025 00:58:24 +0000 (0:00:00.952) 0:10:32.780 ******** 2025-03-27 01:01:55.931343 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931347 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931352 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931357 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931362 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931367 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931372 | orchestrator | 2025-03-27 01:01:55.931376 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-27 01:01:55.931381 | orchestrator | Thursday 27 March 2025 00:58:25 +0000 (0:00:00.712) 0:10:33.493 ******** 2025-03-27 01:01:55.931386 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-03-27 01:01:55.931391 | orchestrator | skipping: [testbed-node-0] => (item=)  
2025-03-27 01:01:55.931396 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931400 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-27 01:01:55.931405 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-03-27 01:01:55.931410 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931415 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-27 01:01:55.931420 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-03-27 01:01:55.931425 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931430 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.931435 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.931467 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931476 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.931481 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.931486 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931490 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.931495 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.931502 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931507 | orchestrator | 2025-03-27 01:01:55.931512 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-27 01:01:55.931517 | orchestrator | Thursday 27 March 2025 00:58:26 +0000 (0:00:01.091) 0:10:34.584 ******** 2025-03-27 01:01:55.931522 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-03-27 01:01:55.931529 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-03-27 01:01:55.931534 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931538 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-03-27 01:01:55.931543 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-03-27 01:01:55.931548 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931553 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-03-27 01:01:55.931557 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-03-27 01:01:55.931562 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931567 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-03-27 01:01:55.931572 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-03-27 01:01:55.931577 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931581 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-03-27 01:01:55.931586 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-03-27 01:01:55.931591 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931596 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-03-27 01:01:55.931600 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-03-27 01:01:55.931608 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931613 | orchestrator | 2025-03-27 01:01:55.931618 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-27 01:01:55.931635 | orchestrator | Thursday 27 March 2025 00:58:26 +0000 (0:00:00.809) 0:10:35.394 ******** 2025-03-27 01:01:55.931641 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931646 | 
orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931650 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931655 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931660 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931665 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931669 | orchestrator | 2025-03-27 01:01:55.931674 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-27 01:01:55.931679 | orchestrator | Thursday 27 March 2025 00:58:27 +0000 (0:00:00.993) 0:10:36.387 ******** 2025-03-27 01:01:55.931684 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931688 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931693 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931698 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931703 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931708 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931712 | orchestrator | 2025-03-27 01:01:55.931717 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-27 01:01:55.931722 | orchestrator | Thursday 27 March 2025 00:58:28 +0000 (0:00:00.712) 0:10:37.100 ******** 2025-03-27 01:01:55.931727 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931731 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931736 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931741 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931746 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931750 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931755 | orchestrator | 2025-03-27 01:01:55.931760 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-27 01:01:55.931764 | orchestrator | Thursday 27 March 2025 00:58:29 +0000 (0:00:00.951) 0:10:38.051 ******** 2025-03-27 01:01:55.931769 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931774 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931779 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931783 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931788 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931793 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931797 | orchestrator | 2025-03-27 01:01:55.931805 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-27 01:01:55.931810 | orchestrator | Thursday 27 March 2025 00:58:30 +0000 (0:00:00.707) 0:10:38.759 ******** 2025-03-27 01:01:55.931814 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931819 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931824 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931829 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931834 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931838 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931843 | orchestrator | 2025-03-27 01:01:55.931848 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-27 01:01:55.931852 | orchestrator | Thursday 27 March 2025 00:58:31 +0000 (0:00:01.039) 0:10:39.798 ******** 2025-03-27 01:01:55.931857 | 
orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931862 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.931866 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.931871 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.931876 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.931881 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.931888 | orchestrator | 2025-03-27 01:01:55.931893 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-27 01:01:55.931898 | orchestrator | Thursday 27 March 2025 00:58:32 +0000 (0:00:00.769) 0:10:40.568 ******** 2025-03-27 01:01:55.931903 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.931908 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.931912 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.931917 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931922 | orchestrator | 2025-03-27 01:01:55.931927 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-27 01:01:55.931931 | orchestrator | Thursday 27 March 2025 00:58:32 +0000 (0:00:00.611) 0:10:41.180 ******** 2025-03-27 01:01:55.931936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.931941 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.931946 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.931950 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931955 | orchestrator | 2025-03-27 01:01:55.931960 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-27 01:01:55.931965 | orchestrator | Thursday 27 March 2025 00:58:33 +0000 (0:00:00.791) 0:10:41.972 ******** 2025-03-27 01:01:55.931969 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.931974 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.931979 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.931983 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.931988 | orchestrator | 2025-03-27 01:01:55.931993 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.931998 | orchestrator | Thursday 27 March 2025 00:58:34 +0000 (0:00:01.013) 0:10:42.985 ******** 2025-03-27 01:01:55.932002 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.932007 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.932012 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.932017 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.932024 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.932029 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.932034 | orchestrator | 2025-03-27 01:01:55.932038 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-27 01:01:55.932043 | orchestrator | Thursday 27 March 2025 00:58:35 +0000 (0:00:00.721) 0:10:43.706 ******** 2025-03-27 01:01:55.932048 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-27 01:01:55.932053 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.932069 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-03-27 01:01:55.932075 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.932080 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-27 01:01:55.932085 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.932089 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.932094 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.932099 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.932104 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.932108 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.932113 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.932118 | orchestrator | 2025-03-27 01:01:55.932123 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-27 01:01:55.932127 | orchestrator | Thursday 27 March 2025 00:58:36 +0000 (0:00:01.436) 0:10:45.142 ******** 2025-03-27 01:01:55.932132 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.932137 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.932142 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.932146 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.932154 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.932159 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.932164 | orchestrator | 2025-03-27 01:01:55.932168 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.932173 | orchestrator | Thursday 27 March 2025 00:58:37 +0000 (0:00:00.738) 0:10:45.881 ******** 2025-03-27 01:01:55.932178 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.932183 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.932187 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.932192 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.932197 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.932201 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.932206 | orchestrator | 2025-03-27 01:01:55.932211 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-27 01:01:55.932216 | orchestrator | Thursday 27 March 2025 00:58:38 +0000 (0:00:00.955) 0:10:46.837 ******** 2025-03-27 01:01:55.932221 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-03-27 01:01:55.932225 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.932230 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-03-27 01:01:55.932235 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.932240 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-03-27 01:01:55.932245 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.932249 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.932254 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.932259 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.932264 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.932268 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.932273 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.932278 | orchestrator | 2025-03-27 01:01:55.932283 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-27 
01:01:55.932287 | orchestrator | Thursday 27 March 2025 00:58:39 +0000 (0:00:01.007) 0:10:47.844 ******** 2025-03-27 01:01:55.932292 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.932297 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.932302 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.932307 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.932311 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.932316 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.932321 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.932326 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.932331 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.932336 | orchestrator | 2025-03-27 01:01:55.932340 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-27 01:01:55.932345 | orchestrator | Thursday 27 March 2025 00:58:40 +0000 (0:00:00.990) 0:10:48.835 ******** 2025-03-27 01:01:55.932350 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-27 01:01:55.932355 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-27 01:01:55.932360 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-27 01:01:55.932364 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.932369 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-27 01:01:55.932374 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-27 01:01:55.932379 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-27 01:01:55.932383 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.932388 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-27 01:01:55.932396 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-27 01:01:55.932401 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-27 01:01:55.932405 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.932410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.932415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.932420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.932424 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.932429 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-27 01:01:55.932434 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-27 01:01:55.932439 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-27 01:01:55.932454 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.932460 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-27 01:01:55.932481 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-27 01:01:55.932487 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-27 01:01:55.932492 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.932496 | 
orchestrator | 2025-03-27 01:01:55.932501 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-27 01:01:55.932506 | orchestrator | Thursday 27 March 2025 00:58:41 +0000 (0:00:01.528) 0:10:50.363 ******** 2025-03-27 01:01:55.932511 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.932515 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.932520 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.932525 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.932530 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.932534 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.932539 | orchestrator | 2025-03-27 01:01:55.932544 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-27 01:01:55.932548 | orchestrator | Thursday 27 March 2025 00:58:43 +0000 (0:00:01.418) 0:10:51.782 ******** 2025-03-27 01:01:55.932553 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.932558 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.932562 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.932567 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-27 01:01:55.932572 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.932577 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-27 01:01:55.932581 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.932586 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-27 01:01:55.932591 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.932596 | orchestrator | 2025-03-27 01:01:55.932600 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-27 01:01:55.932605 | orchestrator | Thursday 27 March 2025 00:58:44 +0000 (0:00:01.489) 0:10:53.272 ******** 2025-03-27 01:01:55.932610 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.932615 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.932622 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.932627 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.932632 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.932637 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.932642 | orchestrator | 2025-03-27 01:01:55.932646 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-27 01:01:55.932651 | orchestrator | Thursday 27 March 2025 00:58:46 +0000 (0:00:01.542) 0:10:54.814 ******** 2025-03-27 01:01:55.932656 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:01:55.932661 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:01:55.932665 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:01:55.932670 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.932675 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.932683 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.932687 | orchestrator | 2025-03-27 01:01:55.932694 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-03-27 01:01:55.932699 | orchestrator | Thursday 27 March 2025 00:58:47 +0000 (0:00:01.550) 0:10:56.364 ******** 2025-03-27 01:01:55.932704 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.932709 | orchestrator | 2025-03-27 01:01:55.932713 | orchestrator | TASK [ceph-crash : get keys 
from monitors] ************************************* 2025-03-27 01:01:55.932718 | orchestrator | Thursday 27 March 2025 00:58:51 +0000 (0:00:03.450) 0:10:59.815 ******** 2025-03-27 01:01:55.932723 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.932728 | orchestrator | 2025-03-27 01:01:55.932732 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-03-27 01:01:55.932737 | orchestrator | Thursday 27 March 2025 00:58:53 +0000 (0:00:01.737) 0:11:01.552 ******** 2025-03-27 01:01:55.932742 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.932747 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.932752 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.932756 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.932761 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.932766 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.932770 | orchestrator | 2025-03-27 01:01:55.932775 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-03-27 01:01:55.932780 | orchestrator | Thursday 27 March 2025 00:58:55 +0000 (0:00:01.901) 0:11:03.454 ******** 2025-03-27 01:01:55.932785 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.932789 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.932794 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.932799 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.932803 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.932808 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.932813 | orchestrator | 2025-03-27 01:01:55.932818 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-03-27 01:01:55.932822 | orchestrator | Thursday 27 March 2025 00:58:56 +0000 (0:00:01.424) 0:11:04.879 ******** 2025-03-27 01:01:55.932827 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.932832 | orchestrator | 2025-03-27 01:01:55.932837 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-03-27 01:01:55.932842 | orchestrator | Thursday 27 March 2025 00:58:58 +0000 (0:00:01.757) 0:11:06.637 ******** 2025-03-27 01:01:55.932847 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.932851 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.932856 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.932861 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.932866 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.932870 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.932875 | orchestrator | 2025-03-27 01:01:55.932880 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-03-27 01:01:55.932885 | orchestrator | Thursday 27 March 2025 00:59:00 +0000 (0:00:02.306) 0:11:08.944 ******** 2025-03-27 01:01:55.932889 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.932894 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.932899 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.932904 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.932912 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.932917 | orchestrator | changed: [testbed-node-5] 2025-03-27 
01:01:55.932921 | orchestrator | 2025-03-27 01:01:55.932926 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-03-27 01:01:55.932931 | orchestrator | Thursday 27 March 2025 00:59:04 +0000 (0:00:04.420) 0:11:13.364 ******** 2025-03-27 01:01:55.932936 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.932949 | orchestrator | 2025-03-27 01:01:55.932953 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-03-27 01:01:55.932958 | orchestrator | Thursday 27 March 2025 00:59:06 +0000 (0:00:01.805) 0:11:15.169 ******** 2025-03-27 01:01:55.932963 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.932968 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.932973 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.932977 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.932982 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.932987 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.932992 | orchestrator | 2025-03-27 01:01:55.932997 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-03-27 01:01:55.933001 | orchestrator | Thursday 27 March 2025 00:59:07 +0000 (0:00:00.834) 0:11:16.003 ******** 2025-03-27 01:01:55.933006 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:01:55.933011 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:01:55.933016 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:01:55.933020 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.933025 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.933030 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.933035 | orchestrator | 2025-03-27 01:01:55.933039 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-03-27 01:01:55.933044 | orchestrator | Thursday 27 March 2025 00:59:10 +0000 (0:00:02.719) 0:11:18.723 ******** 2025-03-27 01:01:55.933049 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:01:55.933054 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:01:55.933061 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:01:55.933066 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.933070 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.933075 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.933080 | orchestrator | 2025-03-27 01:01:55.933085 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-03-27 01:01:55.933089 | orchestrator | 2025-03-27 01:01:55.933094 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-27 01:01:55.933099 | orchestrator | Thursday 27 March 2025 00:59:13 +0000 (0:00:02.773) 0:11:21.497 ******** 2025-03-27 01:01:55.933104 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.933111 | orchestrator | 2025-03-27 01:01:55.933116 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-03-27 01:01:55.933121 | orchestrator | Thursday 27 March 2025 00:59:13 +0000 (0:00:00.860) 0:11:22.357 ******** 2025-03-27 01:01:55.933126 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933130 | 
orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933135 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933140 | orchestrator | 2025-03-27 01:01:55.933145 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-27 01:01:55.933150 | orchestrator | Thursday 27 March 2025 00:59:14 +0000 (0:00:00.343) 0:11:22.700 ******** 2025-03-27 01:01:55.933154 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.933159 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.933164 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.933169 | orchestrator | 2025-03-27 01:01:55.933173 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-27 01:01:55.933178 | orchestrator | Thursday 27 March 2025 00:59:15 +0000 (0:00:00.793) 0:11:23.494 ******** 2025-03-27 01:01:55.933183 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.933188 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.933192 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.933197 | orchestrator | 2025-03-27 01:01:55.933204 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-03-27 01:01:55.933209 | orchestrator | Thursday 27 March 2025 00:59:16 +0000 (0:00:01.277) 0:11:24.772 ******** 2025-03-27 01:01:55.933214 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.933222 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.933226 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.933231 | orchestrator | 2025-03-27 01:01:55.933236 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-27 01:01:55.933241 | orchestrator | Thursday 27 March 2025 00:59:17 +0000 (0:00:00.801) 0:11:25.573 ******** 2025-03-27 01:01:55.933246 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933250 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933255 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933260 | orchestrator | 2025-03-27 01:01:55.933265 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-27 01:01:55.933269 | orchestrator | Thursday 27 March 2025 00:59:17 +0000 (0:00:00.378) 0:11:25.952 ******** 2025-03-27 01:01:55.933274 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933279 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933283 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933288 | orchestrator | 2025-03-27 01:01:55.933293 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-27 01:01:55.933298 | orchestrator | Thursday 27 March 2025 00:59:17 +0000 (0:00:00.356) 0:11:26.308 ******** 2025-03-27 01:01:55.933302 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933307 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933312 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933317 | orchestrator | 2025-03-27 01:01:55.933322 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-27 01:01:55.933326 | orchestrator | Thursday 27 March 2025 00:59:18 +0000 (0:00:00.669) 0:11:26.978 ******** 2025-03-27 01:01:55.933331 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933336 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933341 | orchestrator | skipping: [testbed-node-5] 
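For context: the ceph-handler container checks logged above (mon, osd, mds, rgw, mgr, nfs, tcmu-runner, ceph-crash, ...) all follow the same pattern of probing the container runtime for a named container and recording the outcome as a handler_*_status fact that the restart handlers consult later. The snippet below is a minimal illustrative sketch of that pattern only, not the actual ceph-ansible tasks; the container_binary variable, the podman default, and the container name filter are assumptions made for this example.

# Illustrative sketch (assumed names, not the real ceph-ansible implementation)
- name: Check for a ceph-crash container          # stand-in for the check task seen in the log
  ansible.builtin.command: >-
    {{ container_binary | default('podman') }} ps -q
    --filter name=ceph-crash-{{ ansible_facts['hostname'] }}
  register: ceph_crash_container_stat
  changed_when: false                             # a probe never changes state
  failed_when: false                              # an absent container is a valid result, not a failure

- name: Set handler_crash_status from the probe result
  ansible.builtin.set_fact:
    handler_crash_status: "{{ ceph_crash_container_stat.stdout | length > 0 }}"

Keeping the probe failure-tolerant (failed_when: false) means an absent container simply yields empty stdout and a false status instead of aborting the play, which is what allows the same check tasks to run unchanged on every node of the testbed.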
2025-03-27 01:01:55.933345 | orchestrator | 2025-03-27 01:01:55.933352 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-27 01:01:55.933357 | orchestrator | Thursday 27 March 2025 00:59:18 +0000 (0:00:00.385) 0:11:27.364 ******** 2025-03-27 01:01:55.933362 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933366 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933371 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933376 | orchestrator | 2025-03-27 01:01:55.933381 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-27 01:01:55.933385 | orchestrator | Thursday 27 March 2025 00:59:19 +0000 (0:00:00.395) 0:11:27.759 ******** 2025-03-27 01:01:55.933390 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933395 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933400 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933405 | orchestrator | 2025-03-27 01:01:55.933409 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-27 01:01:55.933414 | orchestrator | Thursday 27 March 2025 00:59:19 +0000 (0:00:00.375) 0:11:28.135 ******** 2025-03-27 01:01:55.933419 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.933424 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.933428 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.933433 | orchestrator | 2025-03-27 01:01:55.933438 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-27 01:01:55.933453 | orchestrator | Thursday 27 March 2025 00:59:20 +0000 (0:00:01.085) 0:11:29.220 ******** 2025-03-27 01:01:55.933457 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933462 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933467 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933472 | orchestrator | 2025-03-27 01:01:55.933476 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-27 01:01:55.933481 | orchestrator | Thursday 27 March 2025 00:59:21 +0000 (0:00:00.412) 0:11:29.632 ******** 2025-03-27 01:01:55.933486 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933491 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933529 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933534 | orchestrator | 2025-03-27 01:01:55.933539 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-27 01:01:55.933544 | orchestrator | Thursday 27 March 2025 00:59:21 +0000 (0:00:00.428) 0:11:30.061 ******** 2025-03-27 01:01:55.933548 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.933553 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.933558 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.933563 | orchestrator | 2025-03-27 01:01:55.933568 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-27 01:01:55.933572 | orchestrator | Thursday 27 March 2025 00:59:22 +0000 (0:00:00.395) 0:11:30.456 ******** 2025-03-27 01:01:55.933577 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.933582 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.933587 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.933591 | orchestrator | 2025-03-27 01:01:55.933596 | orchestrator | TASK [ceph-handler : 
set_fact handler_rgw_status] ****************************** 2025-03-27 01:01:55.933601 | orchestrator | Thursday 27 March 2025 00:59:22 +0000 (0:00:00.852) 0:11:31.309 ******** 2025-03-27 01:01:55.933606 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.933610 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.933618 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.933623 | orchestrator | 2025-03-27 01:01:55.933628 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-27 01:01:55.933633 | orchestrator | Thursday 27 March 2025 00:59:23 +0000 (0:00:00.521) 0:11:31.830 ******** 2025-03-27 01:01:55.933637 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933642 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933647 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933652 | orchestrator | 2025-03-27 01:01:55.933657 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-27 01:01:55.933661 | orchestrator | Thursday 27 March 2025 00:59:23 +0000 (0:00:00.590) 0:11:32.420 ******** 2025-03-27 01:01:55.933666 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933671 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933676 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933680 | orchestrator | 2025-03-27 01:01:55.933685 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-27 01:01:55.933690 | orchestrator | Thursday 27 March 2025 00:59:24 +0000 (0:00:00.518) 0:11:32.939 ******** 2025-03-27 01:01:55.933695 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933700 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933704 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933709 | orchestrator | 2025-03-27 01:01:55.933716 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-27 01:01:55.933721 | orchestrator | Thursday 27 March 2025 00:59:25 +0000 (0:00:01.028) 0:11:33.967 ******** 2025-03-27 01:01:55.933726 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.933731 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.933735 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.933740 | orchestrator | 2025-03-27 01:01:55.933745 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-27 01:01:55.933750 | orchestrator | Thursday 27 March 2025 00:59:26 +0000 (0:00:00.590) 0:11:34.557 ******** 2025-03-27 01:01:55.933755 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933759 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933764 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933769 | orchestrator | 2025-03-27 01:01:55.933774 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-27 01:01:55.933778 | orchestrator | Thursday 27 March 2025 00:59:26 +0000 (0:00:00.445) 0:11:35.003 ******** 2025-03-27 01:01:55.933783 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933788 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933793 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933800 | orchestrator | 2025-03-27 01:01:55.933805 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-27 01:01:55.933810 | 
orchestrator | Thursday 27 March 2025 00:59:27 +0000 (0:00:00.471) 0:11:35.475 ******** 2025-03-27 01:01:55.933815 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933819 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933824 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933829 | orchestrator | 2025-03-27 01:01:55.933834 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-27 01:01:55.933841 | orchestrator | Thursday 27 March 2025 00:59:27 +0000 (0:00:00.719) 0:11:36.194 ******** 2025-03-27 01:01:55.933846 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933851 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933856 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933860 | orchestrator | 2025-03-27 01:01:55.933865 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-27 01:01:55.933870 | orchestrator | Thursday 27 March 2025 00:59:28 +0000 (0:00:00.378) 0:11:36.573 ******** 2025-03-27 01:01:55.933875 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933880 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933884 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933889 | orchestrator | 2025-03-27 01:01:55.933894 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-27 01:01:55.933898 | orchestrator | Thursday 27 March 2025 00:59:28 +0000 (0:00:00.418) 0:11:36.991 ******** 2025-03-27 01:01:55.933903 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933908 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933913 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933917 | orchestrator | 2025-03-27 01:01:55.933922 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-27 01:01:55.933927 | orchestrator | Thursday 27 March 2025 00:59:28 +0000 (0:00:00.373) 0:11:37.364 ******** 2025-03-27 01:01:55.933932 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933937 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933942 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933946 | orchestrator | 2025-03-27 01:01:55.933951 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-27 01:01:55.933956 | orchestrator | Thursday 27 March 2025 00:59:29 +0000 (0:00:00.512) 0:11:37.877 ******** 2025-03-27 01:01:55.933961 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933966 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933970 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.933975 | orchestrator | 2025-03-27 01:01:55.933980 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-27 01:01:55.933985 | orchestrator | Thursday 27 March 2025 00:59:29 +0000 (0:00:00.316) 0:11:38.193 ******** 2025-03-27 01:01:55.933990 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.933995 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.933999 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934004 | orchestrator | 2025-03-27 01:01:55.934009 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-27 
01:01:55.934042 | orchestrator | Thursday 27 March 2025 00:59:30 +0000 (0:00:00.374) 0:11:38.567 ******** 2025-03-27 01:01:55.934048 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934053 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934058 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934063 | orchestrator | 2025-03-27 01:01:55.934067 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-27 01:01:55.934072 | orchestrator | Thursday 27 March 2025 00:59:30 +0000 (0:00:00.386) 0:11:38.954 ******** 2025-03-27 01:01:55.934077 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934082 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934086 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934094 | orchestrator | 2025-03-27 01:01:55.934099 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-27 01:01:55.934104 | orchestrator | Thursday 27 March 2025 00:59:31 +0000 (0:00:00.594) 0:11:39.548 ******** 2025-03-27 01:01:55.934108 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934113 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934118 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934123 | orchestrator | 2025-03-27 01:01:55.934127 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-27 01:01:55.934132 | orchestrator | Thursday 27 March 2025 00:59:31 +0000 (0:00:00.408) 0:11:39.957 ******** 2025-03-27 01:01:55.934137 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.934142 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.934147 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934152 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.934157 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.934161 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934169 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.934176 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.934182 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934186 | orchestrator | 2025-03-27 01:01:55.934191 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-27 01:01:55.934196 | orchestrator | Thursday 27 March 2025 00:59:32 +0000 (0:00:00.517) 0:11:40.474 ******** 2025-03-27 01:01:55.934201 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-03-27 01:01:55.934205 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-03-27 01:01:55.934210 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934215 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-03-27 01:01:55.934220 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-03-27 01:01:55.934224 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934229 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-03-27 01:01:55.934234 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-03-27 01:01:55.934239 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934243 | orchestrator | 2025-03-27 01:01:55.934248 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target] ******************************* 2025-03-27 01:01:55.934253 | orchestrator | Thursday 27 March 2025 00:59:32 +0000 (0:00:00.581) 0:11:41.056 ******** 2025-03-27 01:01:55.934258 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934263 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934267 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934272 | orchestrator | 2025-03-27 01:01:55.934279 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-27 01:01:55.934284 | orchestrator | Thursday 27 March 2025 00:59:33 +0000 (0:00:01.032) 0:11:42.088 ******** 2025-03-27 01:01:55.934289 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934294 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934299 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934303 | orchestrator | 2025-03-27 01:01:55.934308 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-27 01:01:55.934313 | orchestrator | Thursday 27 March 2025 00:59:34 +0000 (0:00:00.460) 0:11:42.549 ******** 2025-03-27 01:01:55.934318 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934323 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934327 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934332 | orchestrator | 2025-03-27 01:01:55.934337 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-27 01:01:55.934344 | orchestrator | Thursday 27 March 2025 00:59:34 +0000 (0:00:00.413) 0:11:42.963 ******** 2025-03-27 01:01:55.934352 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934357 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934362 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934366 | orchestrator | 2025-03-27 01:01:55.934371 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-27 01:01:55.934376 | orchestrator | Thursday 27 March 2025 00:59:34 +0000 (0:00:00.420) 0:11:43.384 ******** 2025-03-27 01:01:55.934381 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934386 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934390 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934395 | orchestrator | 2025-03-27 01:01:55.934400 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-27 01:01:55.934405 | orchestrator | Thursday 27 March 2025 00:59:35 +0000 (0:00:00.668) 0:11:44.052 ******** 2025-03-27 01:01:55.934409 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934414 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934419 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934424 | orchestrator | 2025-03-27 01:01:55.934428 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-27 01:01:55.934433 | orchestrator | Thursday 27 March 2025 00:59:35 +0000 (0:00:00.383) 0:11:44.435 ******** 2025-03-27 01:01:55.934438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.934470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.934476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.934480 | orchestrator | 
skipping: [testbed-node-3] 2025-03-27 01:01:55.934485 | orchestrator | 2025-03-27 01:01:55.934490 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-27 01:01:55.934495 | orchestrator | Thursday 27 March 2025 00:59:36 +0000 (0:00:00.550) 0:11:44.985 ******** 2025-03-27 01:01:55.934500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.934505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.934509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.934514 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934519 | orchestrator | 2025-03-27 01:01:55.934524 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-27 01:01:55.934529 | orchestrator | Thursday 27 March 2025 00:59:37 +0000 (0:00:00.501) 0:11:45.487 ******** 2025-03-27 01:01:55.934534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.934538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.934543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.934548 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934553 | orchestrator | 2025-03-27 01:01:55.934558 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.934562 | orchestrator | Thursday 27 March 2025 00:59:37 +0000 (0:00:00.463) 0:11:45.951 ******** 2025-03-27 01:01:55.934567 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934572 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934577 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934582 | orchestrator | 2025-03-27 01:01:55.934587 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-27 01:01:55.934591 | orchestrator | Thursday 27 March 2025 00:59:37 +0000 (0:00:00.371) 0:11:46.322 ******** 2025-03-27 01:01:55.934596 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.934601 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934606 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.934611 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934615 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.934620 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934625 | orchestrator | 2025-03-27 01:01:55.934633 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-27 01:01:55.934638 | orchestrator | Thursday 27 March 2025 00:59:38 +0000 (0:00:00.842) 0:11:47.165 ******** 2025-03-27 01:01:55.934643 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934648 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934652 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934657 | orchestrator | 2025-03-27 01:01:55.934662 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.934667 | orchestrator | Thursday 27 March 2025 00:59:39 +0000 (0:00:00.360) 0:11:47.526 ******** 2025-03-27 01:01:55.934672 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934676 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934681 | orchestrator | skipping: 
[testbed-node-5] 2025-03-27 01:01:55.934686 | orchestrator | 2025-03-27 01:01:55.934691 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-27 01:01:55.934695 | orchestrator | Thursday 27 March 2025 00:59:39 +0000 (0:00:00.397) 0:11:47.923 ******** 2025-03-27 01:01:55.934700 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.934705 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934712 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.934717 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934722 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.934726 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934731 | orchestrator | 2025-03-27 01:01:55.934736 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-27 01:01:55.934741 | orchestrator | Thursday 27 March 2025 00:59:40 +0000 (0:00:00.523) 0:11:48.446 ******** 2025-03-27 01:01:55.934746 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.934751 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934756 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.934761 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934765 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.934770 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934775 | orchestrator | 2025-03-27 01:01:55.934780 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-27 01:01:55.934785 | orchestrator | Thursday 27 March 2025 00:59:40 +0000 (0:00:00.743) 0:11:49.190 ******** 2025-03-27 01:01:55.934789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.934794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.934799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.934804 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934809 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-27 01:01:55.934813 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-27 01:01:55.934818 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-27 01:01:55.934823 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934828 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-27 01:01:55.934833 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-27 01:01:55.934837 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-27 01:01:55.934842 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934847 | orchestrator | 2025-03-27 01:01:55.934852 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-27 01:01:55.934857 | orchestrator | Thursday 27 March 2025 00:59:41 +0000 (0:00:00.664) 0:11:49.855 ******** 2025-03-27 01:01:55.934861 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934870 | 
orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934875 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934880 | orchestrator | 2025-03-27 01:01:55.934885 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-27 01:01:55.934889 | orchestrator | Thursday 27 March 2025 00:59:42 +0000 (0:00:00.879) 0:11:50.735 ******** 2025-03-27 01:01:55.934894 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-27 01:01:55.934899 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934904 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-27 01:01:55.934909 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934913 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-27 01:01:55.934918 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934923 | orchestrator | 2025-03-27 01:01:55.934928 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-27 01:01:55.934933 | orchestrator | Thursday 27 March 2025 00:59:42 +0000 (0:00:00.609) 0:11:51.344 ******** 2025-03-27 01:01:55.934938 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934943 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934948 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934953 | orchestrator | 2025-03-27 01:01:55.934957 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-27 01:01:55.934965 | orchestrator | Thursday 27 March 2025 00:59:43 +0000 (0:00:00.864) 0:11:52.209 ******** 2025-03-27 01:01:55.934970 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.934974 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.934979 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.934984 | orchestrator | 2025-03-27 01:01:55.934989 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-03-27 01:01:55.934993 | orchestrator | Thursday 27 March 2025 00:59:44 +0000 (0:00:00.577) 0:11:52.786 ******** 2025-03-27 01:01:55.934998 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.935003 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.935008 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-03-27 01:01:55.935012 | orchestrator | 2025-03-27 01:01:55.935017 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-03-27 01:01:55.935022 | orchestrator | Thursday 27 March 2025 00:59:45 +0000 (0:00:00.749) 0:11:53.536 ******** 2025-03-27 01:01:55.935027 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-03-27 01:01:55.935032 | orchestrator | 2025-03-27 01:01:55.935036 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-03-27 01:01:55.935041 | orchestrator | Thursday 27 March 2025 00:59:47 +0000 (0:00:02.151) 0:11:55.688 ******** 2025-03-27 01:01:55.935046 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-03-27 01:01:55.935052 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.935057 | orchestrator | 2025-03-27 01:01:55.935062 | orchestrator | TASK [ceph-mds : 
create filesystem pools] ************************************** 2025-03-27 01:01:55.935068 | orchestrator | Thursday 27 March 2025 00:59:47 +0000 (0:00:00.394) 0:11:56.083 ******** 2025-03-27 01:01:55.935075 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-27 01:01:55.935080 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-27 01:01:55.935085 | orchestrator | 2025-03-27 01:01:55.935093 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-03-27 01:01:55.935098 | orchestrator | Thursday 27 March 2025 00:59:54 +0000 (0:00:06.801) 0:12:02.885 ******** 2025-03-27 01:01:55.935102 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-03-27 01:01:55.935107 | orchestrator | 2025-03-27 01:01:55.935112 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-03-27 01:01:55.935117 | orchestrator | Thursday 27 March 2025 00:59:57 +0000 (0:00:03.121) 0:12:06.007 ******** 2025-03-27 01:01:55.935121 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.935126 | orchestrator | 2025-03-27 01:01:55.935131 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-03-27 01:01:55.935136 | orchestrator | Thursday 27 March 2025 00:59:58 +0000 (0:00:00.823) 0:12:06.831 ******** 2025-03-27 01:01:55.935141 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-03-27 01:01:55.935145 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-03-27 01:01:55.935150 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-03-27 01:01:55.935155 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-03-27 01:01:55.935162 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-03-27 01:01:55.935167 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-03-27 01:01:55.935172 | orchestrator | 2025-03-27 01:01:55.935176 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-03-27 01:01:55.935181 | orchestrator | Thursday 27 March 2025 00:59:59 +0000 (0:00:01.153) 0:12:07.984 ******** 2025-03-27 01:01:55.935186 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:01:55.935191 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-27 01:01:55.935195 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-03-27 01:01:55.935200 | orchestrator | 2025-03-27 01:01:55.935205 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-03-27 01:01:55.935210 | orchestrator | Thursday 27 March 2025 01:00:01 +0000 (0:00:02.055) 0:12:10.040 ******** 2025-03-27 01:01:55.935215 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-27 
01:01:55.935219 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-27 01:01:55.935224 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.935229 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-27 01:01:55.935234 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-27 01:01:55.935239 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.935243 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-27 01:01:55.935248 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-27 01:01:55.935253 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.935257 | orchestrator | 2025-03-27 01:01:55.935262 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-03-27 01:01:55.935267 | orchestrator | Thursday 27 March 2025 01:00:03 +0000 (0:00:01.760) 0:12:11.800 ******** 2025-03-27 01:01:55.935272 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.935277 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.935281 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.935286 | orchestrator | 2025-03-27 01:01:55.935291 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-03-27 01:01:55.935296 | orchestrator | Thursday 27 March 2025 01:00:03 +0000 (0:00:00.392) 0:12:12.192 ******** 2025-03-27 01:01:55.935300 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.935305 | orchestrator | 2025-03-27 01:01:55.935310 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-03-27 01:01:55.935317 | orchestrator | Thursday 27 March 2025 01:00:04 +0000 (0:00:00.626) 0:12:12.818 ******** 2025-03-27 01:01:55.935322 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.935327 | orchestrator | 2025-03-27 01:01:55.935332 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-03-27 01:01:55.935337 | orchestrator | Thursday 27 March 2025 01:00:05 +0000 (0:00:00.924) 0:12:13.742 ******** 2025-03-27 01:01:55.935341 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.935346 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.935351 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.935355 | orchestrator | 2025-03-27 01:01:55.935363 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-03-27 01:01:55.935368 | orchestrator | Thursday 27 March 2025 01:00:06 +0000 (0:00:01.317) 0:12:15.060 ******** 2025-03-27 01:01:55.935372 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.935377 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.935382 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.935387 | orchestrator | 2025-03-27 01:01:55.935393 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-03-27 01:01:55.935398 | orchestrator | Thursday 27 March 2025 01:00:07 +0000 (0:00:01.373) 0:12:16.434 ******** 2025-03-27 01:01:55.935403 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.935408 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.935413 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.935417 | orchestrator | 2025-03-27 
01:01:55.935422 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-03-27 01:01:55.935427 | orchestrator | Thursday 27 March 2025 01:00:10 +0000 (0:00:02.183) 0:12:18.618 ******** 2025-03-27 01:01:55.935432 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.935436 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.935453 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.935458 | orchestrator | 2025-03-27 01:01:55.935463 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-03-27 01:01:55.935468 | orchestrator | Thursday 27 March 2025 01:00:12 +0000 (0:00:01.984) 0:12:20.602 ******** 2025-03-27 01:01:55.935472 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-03-27 01:01:55.935477 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-03-27 01:01:55.935482 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-03-27 01:01:55.935487 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.935492 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.935496 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.935501 | orchestrator | 2025-03-27 01:01:55.935506 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-27 01:01:55.935511 | orchestrator | Thursday 27 March 2025 01:00:29 +0000 (0:00:17.264) 0:12:37.866 ******** 2025-03-27 01:01:55.935516 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.935520 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.935525 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.935530 | orchestrator | 2025-03-27 01:01:55.935535 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-03-27 01:01:55.935539 | orchestrator | Thursday 27 March 2025 01:00:30 +0000 (0:00:00.714) 0:12:38.581 ******** 2025-03-27 01:01:55.935544 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.935549 | orchestrator | 2025-03-27 01:01:55.935554 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-03-27 01:01:55.935558 | orchestrator | Thursday 27 March 2025 01:00:30 +0000 (0:00:00.811) 0:12:39.392 ******** 2025-03-27 01:01:55.935563 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.935568 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.935573 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.935583 | orchestrator | 2025-03-27 01:01:55.935588 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-03-27 01:01:55.935592 | orchestrator | Thursday 27 March 2025 01:00:31 +0000 (0:00:00.392) 0:12:39.785 ******** 2025-03-27 01:01:55.935597 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.935602 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.935607 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.935611 | orchestrator | 2025-03-27 01:01:55.935616 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-03-27 01:01:55.935621 | orchestrator | Thursday 27 March 2025 01:00:32 +0000 (0:00:01.255) 0:12:41.041 ******** 2025-03-27 01:01:55.935626 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.935630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.935635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.935640 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.935645 | orchestrator | 2025-03-27 01:01:55.935650 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-03-27 01:01:55.935654 | orchestrator | Thursday 27 March 2025 01:00:33 +0000 (0:00:00.973) 0:12:42.014 ******** 2025-03-27 01:01:55.935659 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.935664 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.935669 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.935674 | orchestrator | 2025-03-27 01:01:55.935678 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-27 01:01:55.935683 | orchestrator | Thursday 27 March 2025 01:00:34 +0000 (0:00:00.626) 0:12:42.641 ******** 2025-03-27 01:01:55.935688 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.935693 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.935697 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.935702 | orchestrator | 2025-03-27 01:01:55.935707 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-03-27 01:01:55.935712 | orchestrator | 2025-03-27 01:01:55.935717 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-03-27 01:01:55.935722 | orchestrator | Thursday 27 March 2025 01:00:36 +0000 (0:00:02.207) 0:12:44.848 ******** 2025-03-27 01:01:55.935726 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.935733 | orchestrator | 2025-03-27 01:01:55.935738 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-03-27 01:01:55.935743 | orchestrator | Thursday 27 March 2025 01:00:37 +0000 (0:00:00.860) 0:12:45.709 ******** 2025-03-27 01:01:55.935748 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.935753 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.935758 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.935762 | orchestrator | 2025-03-27 01:01:55.935767 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-03-27 01:01:55.935772 | orchestrator | Thursday 27 March 2025 01:00:37 +0000 (0:00:00.351) 0:12:46.060 ******** 2025-03-27 01:01:55.935777 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.935784 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.935789 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.935794 | orchestrator | 2025-03-27 01:01:55.935803 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-03-27 01:01:55.935808 | orchestrator | Thursday 27 March 2025 01:00:38 +0000 (0:00:00.714) 0:12:46.774 ******** 2025-03-27 01:01:55.935813 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.935821 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.935826 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.935830 | orchestrator | 2025-03-27 01:01:55.935835 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
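The "FAILED - RETRYING: wait for mds socket to exist" messages a little further up are a normal startup race: "systemd start mds container" returns as soon as systemd has launched the unit, and the ceph-mds daemon inside the container needs a few more seconds to create its admin socket, so the task polls until the socket appears. A minimal sketch of such a poll, assuming the conventional admin-socket path /var/run/ceph/ceph-mds.<hostname>.asok and the 5-retry budget visible in the log (illustrative Ansible only, not the ceph-ansible source):

  - name: wait for mds socket to exist
    ansible.builtin.stat:
      path: "/var/run/ceph/ceph-mds.{{ ansible_facts['hostname'] }}.asok"   # assumed socket path
    register: mds_socket
    until: mds_socket.stat.exists   # keep polling until the daemon has created its socket
    retries: 5                      # matches the "5 retries left" messages above
    delay: 15                       # seconds between attempts; value assumed

In the run above a single retry per node was enough; the task then reports ok and the mds handlers fire.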
2025-03-27 01:01:55.935840 | orchestrator | Thursday 27 March 2025 01:00:39 +0000 (0:00:01.053) 0:12:47.828 ******** 2025-03-27 01:01:55.935845 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.935853 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.935858 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.935862 | orchestrator | 2025-03-27 01:01:55.935867 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-03-27 01:01:55.935872 | orchestrator | Thursday 27 March 2025 01:00:40 +0000 (0:00:00.765) 0:12:48.593 ******** 2025-03-27 01:01:55.935877 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.935882 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.935886 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.935891 | orchestrator | 2025-03-27 01:01:55.935896 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-03-27 01:01:55.935901 | orchestrator | Thursday 27 March 2025 01:00:40 +0000 (0:00:00.406) 0:12:48.999 ******** 2025-03-27 01:01:55.935906 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.935911 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.935915 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.935920 | orchestrator | 2025-03-27 01:01:55.935925 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-03-27 01:01:55.935930 | orchestrator | Thursday 27 March 2025 01:00:40 +0000 (0:00:00.335) 0:12:49.335 ******** 2025-03-27 01:01:55.935934 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.935939 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.935944 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.935949 | orchestrator | 2025-03-27 01:01:55.935954 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-03-27 01:01:55.935958 | orchestrator | Thursday 27 March 2025 01:00:41 +0000 (0:00:00.689) 0:12:50.024 ******** 2025-03-27 01:01:55.935963 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.935968 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.935973 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.935977 | orchestrator | 2025-03-27 01:01:55.935982 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-03-27 01:01:55.935987 | orchestrator | Thursday 27 March 2025 01:00:41 +0000 (0:00:00.349) 0:12:50.373 ******** 2025-03-27 01:01:55.935992 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.935997 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936001 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936006 | orchestrator | 2025-03-27 01:01:55.936011 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-03-27 01:01:55.936016 | orchestrator | Thursday 27 March 2025 01:00:42 +0000 (0:00:00.377) 0:12:50.751 ******** 2025-03-27 01:01:55.936020 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936025 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936030 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936035 | orchestrator | 2025-03-27 01:01:55.936040 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-03-27 01:01:55.936044 | orchestrator | Thursday 27 March 2025 01:00:42 
+0000 (0:00:00.350) 0:12:51.101 ******** 2025-03-27 01:01:55.936049 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.936054 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.936059 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.936064 | orchestrator | 2025-03-27 01:01:55.936068 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-03-27 01:01:55.936073 | orchestrator | Thursday 27 March 2025 01:00:43 +0000 (0:00:01.122) 0:12:52.224 ******** 2025-03-27 01:01:55.936078 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936083 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936088 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936093 | orchestrator | 2025-03-27 01:01:55.936097 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-03-27 01:01:55.936102 | orchestrator | Thursday 27 March 2025 01:00:44 +0000 (0:00:00.348) 0:12:52.572 ******** 2025-03-27 01:01:55.936107 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936112 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936119 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936124 | orchestrator | 2025-03-27 01:01:55.936129 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-03-27 01:01:55.936134 | orchestrator | Thursday 27 March 2025 01:00:44 +0000 (0:00:00.371) 0:12:52.944 ******** 2025-03-27 01:01:55.936139 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.936144 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.936148 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.936153 | orchestrator | 2025-03-27 01:01:55.936158 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-03-27 01:01:55.936163 | orchestrator | Thursday 27 March 2025 01:00:44 +0000 (0:00:00.386) 0:12:53.330 ******** 2025-03-27 01:01:55.936167 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.936172 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.936177 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.936181 | orchestrator | 2025-03-27 01:01:55.936186 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-03-27 01:01:55.936191 | orchestrator | Thursday 27 March 2025 01:00:45 +0000 (0:00:00.695) 0:12:54.026 ******** 2025-03-27 01:01:55.936196 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.936200 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.936205 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.936210 | orchestrator | 2025-03-27 01:01:55.936215 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-03-27 01:01:55.936220 | orchestrator | Thursday 27 March 2025 01:00:45 +0000 (0:00:00.366) 0:12:54.393 ******** 2025-03-27 01:01:55.936224 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936229 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936234 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936239 | orchestrator | 2025-03-27 01:01:55.936244 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-03-27 01:01:55.936250 | orchestrator | Thursday 27 March 2025 01:00:46 +0000 (0:00:00.364) 0:12:54.757 ******** 2025-03-27 01:01:55.936255 | orchestrator | skipping: [testbed-node-3] 2025-03-27 
01:01:55.936260 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936265 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936270 | orchestrator | 2025-03-27 01:01:55.936277 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-03-27 01:01:55.936282 | orchestrator | Thursday 27 March 2025 01:00:46 +0000 (0:00:00.356) 0:12:55.114 ******** 2025-03-27 01:01:55.936287 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936294 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936299 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936303 | orchestrator | 2025-03-27 01:01:55.936308 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-03-27 01:01:55.936313 | orchestrator | Thursday 27 March 2025 01:00:47 +0000 (0:00:00.639) 0:12:55.753 ******** 2025-03-27 01:01:55.936318 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.936323 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.936327 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.936332 | orchestrator | 2025-03-27 01:01:55.936337 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-03-27 01:01:55.936342 | orchestrator | Thursday 27 March 2025 01:00:47 +0000 (0:00:00.392) 0:12:56.146 ******** 2025-03-27 01:01:55.936346 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936351 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936356 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936361 | orchestrator | 2025-03-27 01:01:55.936366 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-03-27 01:01:55.936370 | orchestrator | Thursday 27 March 2025 01:00:48 +0000 (0:00:00.457) 0:12:56.603 ******** 2025-03-27 01:01:55.936375 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936380 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936385 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936394 | orchestrator | 2025-03-27 01:01:55.936399 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-03-27 01:01:55.936404 | orchestrator | Thursday 27 March 2025 01:00:48 +0000 (0:00:00.375) 0:12:56.978 ******** 2025-03-27 01:01:55.936409 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936414 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936418 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936423 | orchestrator | 2025-03-27 01:01:55.936428 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-03-27 01:01:55.936433 | orchestrator | Thursday 27 March 2025 01:00:49 +0000 (0:00:00.673) 0:12:57.652 ******** 2025-03-27 01:01:55.936438 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936454 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936459 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936464 | orchestrator | 2025-03-27 01:01:55.936469 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-03-27 01:01:55.936474 | orchestrator | Thursday 27 March 2025 01:00:49 +0000 (0:00:00.376) 0:12:58.029 ******** 2025-03-27 01:01:55.936479 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936484 | orchestrator | skipping: [testbed-node-4] 2025-03-27 
01:01:55.936488 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936493 | orchestrator | 2025-03-27 01:01:55.936498 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-03-27 01:01:55.936503 | orchestrator | Thursday 27 March 2025 01:00:50 +0000 (0:00:00.430) 0:12:58.459 ******** 2025-03-27 01:01:55.936508 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936513 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936518 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936522 | orchestrator | 2025-03-27 01:01:55.936527 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-03-27 01:01:55.936532 | orchestrator | Thursday 27 March 2025 01:00:50 +0000 (0:00:00.344) 0:12:58.803 ******** 2025-03-27 01:01:55.936537 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936542 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936546 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936551 | orchestrator | 2025-03-27 01:01:55.936556 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-03-27 01:01:55.936561 | orchestrator | Thursday 27 March 2025 01:00:51 +0000 (0:00:00.695) 0:12:59.499 ******** 2025-03-27 01:01:55.936566 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936571 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936575 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936580 | orchestrator | 2025-03-27 01:01:55.936585 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-03-27 01:01:55.936590 | orchestrator | Thursday 27 March 2025 01:00:51 +0000 (0:00:00.374) 0:12:59.873 ******** 2025-03-27 01:01:55.936595 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936600 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936605 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936610 | orchestrator | 2025-03-27 01:01:55.936614 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-03-27 01:01:55.936619 | orchestrator | Thursday 27 March 2025 01:00:51 +0000 (0:00:00.363) 0:13:00.237 ******** 2025-03-27 01:01:55.936624 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936629 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936634 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936638 | orchestrator | 2025-03-27 01:01:55.936643 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-03-27 01:01:55.936648 | orchestrator | Thursday 27 March 2025 01:00:52 +0000 (0:00:00.373) 0:13:00.611 ******** 2025-03-27 01:01:55.936653 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936658 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936663 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936670 | orchestrator | 2025-03-27 01:01:55.936675 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-03-27 01:01:55.936680 | orchestrator | Thursday 27 March 2025 01:00:52 +0000 (0:00:00.658) 0:13:01.269 ******** 2025-03-27 01:01:55.936685 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936690 | orchestrator | 
skipping: [testbed-node-4] 2025-03-27 01:01:55.936697 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936702 | orchestrator | 2025-03-27 01:01:55.936707 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-03-27 01:01:55.936711 | orchestrator | Thursday 27 March 2025 01:00:53 +0000 (0:00:00.336) 0:13:01.606 ******** 2025-03-27 01:01:55.936716 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.936721 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-03-27 01:01:55.936726 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936733 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.936739 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-03-27 01:01:55.936743 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936748 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.936753 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-03-27 01:01:55.936758 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936763 | orchestrator | 2025-03-27 01:01:55.936768 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-03-27 01:01:55.936772 | orchestrator | Thursday 27 March 2025 01:00:53 +0000 (0:00:00.437) 0:13:02.043 ******** 2025-03-27 01:01:55.936777 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-03-27 01:01:55.936784 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-03-27 01:01:55.936789 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936794 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-03-27 01:01:55.936799 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-03-27 01:01:55.936803 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936808 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-03-27 01:01:55.936813 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-03-27 01:01:55.936818 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936823 | orchestrator | 2025-03-27 01:01:55.936827 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-03-27 01:01:55.936832 | orchestrator | Thursday 27 March 2025 01:00:54 +0000 (0:00:00.410) 0:13:02.454 ******** 2025-03-27 01:01:55.936837 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936841 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936846 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936851 | orchestrator | 2025-03-27 01:01:55.936856 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-03-27 01:01:55.936860 | orchestrator | Thursday 27 March 2025 01:00:54 +0000 (0:00:00.727) 0:13:03.181 ******** 2025-03-27 01:01:55.936865 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936872 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936877 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936882 | orchestrator | 2025-03-27 01:01:55.936887 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-27 01:01:55.936892 | orchestrator | Thursday 27 March 2025 01:00:55 +0000 (0:00:00.369) 0:13:03.551 ******** 2025-03-27 
01:01:55.936896 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936901 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936906 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936911 | orchestrator | 2025-03-27 01:01:55.936915 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-27 01:01:55.936920 | orchestrator | Thursday 27 March 2025 01:00:55 +0000 (0:00:00.360) 0:13:03.911 ******** 2025-03-27 01:01:55.936928 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936933 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936938 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936942 | orchestrator | 2025-03-27 01:01:55.936947 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-27 01:01:55.936952 | orchestrator | Thursday 27 March 2025 01:00:55 +0000 (0:00:00.336) 0:13:04.248 ******** 2025-03-27 01:01:55.936956 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936961 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936966 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936971 | orchestrator | 2025-03-27 01:01:55.936975 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-27 01:01:55.936980 | orchestrator | Thursday 27 March 2025 01:00:56 +0000 (0:00:00.657) 0:13:04.905 ******** 2025-03-27 01:01:55.936985 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.936990 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.936995 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.936999 | orchestrator | 2025-03-27 01:01:55.937004 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-27 01:01:55.937009 | orchestrator | Thursday 27 March 2025 01:00:56 +0000 (0:00:00.351) 0:13:05.256 ******** 2025-03-27 01:01:55.937014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.937018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.937023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.937028 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937033 | orchestrator | 2025-03-27 01:01:55.937037 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-27 01:01:55.937042 | orchestrator | Thursday 27 March 2025 01:00:57 +0000 (0:00:00.467) 0:13:05.724 ******** 2025-03-27 01:01:55.937047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.937052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.937056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.937061 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937066 | orchestrator | 2025-03-27 01:01:55.937071 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-27 01:01:55.937075 | orchestrator | Thursday 27 March 2025 01:00:57 +0000 (0:00:00.461) 0:13:06.185 ******** 2025-03-27 01:01:55.937080 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.937085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.937091 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-03-27 01:01:55.937096 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937101 | orchestrator | 2025-03-27 01:01:55.937106 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.937111 | orchestrator | Thursday 27 March 2025 01:00:58 +0000 (0:00:00.458) 0:13:06.643 ******** 2025-03-27 01:01:55.937115 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937120 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937125 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937130 | orchestrator | 2025-03-27 01:01:55.937135 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-27 01:01:55.937139 | orchestrator | Thursday 27 March 2025 01:00:58 +0000 (0:00:00.364) 0:13:07.008 ******** 2025-03-27 01:01:55.937144 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.937149 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937154 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.937159 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937163 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.937168 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937173 | orchestrator | 2025-03-27 01:01:55.937181 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-27 01:01:55.937185 | orchestrator | Thursday 27 March 2025 01:00:59 +0000 (0:00:00.851) 0:13:07.859 ******** 2025-03-27 01:01:55.937190 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937195 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937200 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937204 | orchestrator | 2025-03-27 01:01:55.937209 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:01:55.937214 | orchestrator | Thursday 27 March 2025 01:00:59 +0000 (0:00:00.332) 0:13:08.192 ******** 2025-03-27 01:01:55.937219 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937223 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937228 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937233 | orchestrator | 2025-03-27 01:01:55.937238 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-27 01:01:55.937242 | orchestrator | Thursday 27 March 2025 01:01:00 +0000 (0:00:00.341) 0:13:08.533 ******** 2025-03-27 01:01:55.937247 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:01:55.937252 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937257 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:01:55.937262 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937267 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:01:55.937271 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937276 | orchestrator | 2025-03-27 01:01:55.937281 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-27 01:01:55.937286 | orchestrator | Thursday 27 March 2025 01:01:00 +0000 (0:00:00.449) 0:13:08.983 ******** 2025-03-27 01:01:55.937290 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  
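The skipped items above show the shape of the per-host RGW facts that ceph-facts assembles: one entry per RGW instance with its name, bind address and frontend port. Restated as plain YAML for testbed-node-3 (values copied from the log; nodes 4 and 5 are identical apart from the .14 and .15 addresses):

  rgw_instances:
    - instance_name: rgw0
      radosgw_address: 192.168.16.13
      radosgw_frontend_port: 8081

This is the structure the "systemd start rgw container" task consumes later in the play, starting one ceph-radosgw unit per listed instance.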
2025-03-27 01:01:55.937298 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937303 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.937308 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937312 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-27 01:01:55.937317 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937322 | orchestrator | 2025-03-27 01:01:55.937327 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-27 01:01:55.937332 | orchestrator | Thursday 27 March 2025 01:01:01 +0000 (0:00:00.661) 0:13:09.644 ******** 2025-03-27 01:01:55.937337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.937341 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.937346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.937351 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937356 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-27 01:01:55.937361 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-27 01:01:55.937365 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-27 01:01:55.937370 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937375 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-27 01:01:55.937380 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-27 01:01:55.937384 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-27 01:01:55.937389 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937394 | orchestrator | 2025-03-27 01:01:55.937399 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-03-27 01:01:55.937403 | orchestrator | Thursday 27 March 2025 01:01:01 +0000 (0:00:00.706) 0:13:10.351 ******** 2025-03-27 01:01:55.937408 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937416 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937421 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937425 | orchestrator | 2025-03-27 01:01:55.937430 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-03-27 01:01:55.937435 | orchestrator | Thursday 27 March 2025 01:01:02 +0000 (0:00:00.925) 0:13:11.276 ******** 2025-03-27 01:01:55.937449 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-27 01:01:55.937456 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937461 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-27 01:01:55.937466 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937471 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-27 01:01:55.937476 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937480 | orchestrator | 2025-03-27 01:01:55.937487 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-03-27 01:01:55.937492 | orchestrator | Thursday 27 March 2025 01:01:03 +0000 (0:00:00.663) 0:13:11.940 ******** 2025-03-27 01:01:55.937497 | orchestrator | skipping: [testbed-node-3] 
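The "create rgw keyrings" task is skipped on all three nodes here (its conditional is not visible in this log), while the RGW keys themselves are fetched from the monitors and copied to the nodes by the common.yml tasks a little further down. For orientation, such a keyring boils down to a cephx entity plus capabilities; a minimal, purely illustrative sketch (entity name, capabilities and keyring path are assumptions, not the ceph-ansible implementation):

  - name: create keyring for rgw0 on this host
    ansible.builtin.command: >
      ceph auth get-or-create client.rgw.{{ ansible_facts['hostname'] }}.rgw0
      mon 'allow rw' osd 'allow rwx'
      -o /etc/ceph/ceph.client.rgw.{{ ansible_facts['hostname'] }}.rgw0.keyring
    args:
      creates: /etc/ceph/ceph.client.rgw.{{ ansible_facts['hostname'] }}.rgw0.keyring   # skip if the keyring already exists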
2025-03-27 01:01:55.937502 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937506 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937511 | orchestrator | 2025-03-27 01:01:55.937516 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-03-27 01:01:55.937521 | orchestrator | Thursday 27 March 2025 01:01:04 +0000 (0:00:00.873) 0:13:12.813 ******** 2025-03-27 01:01:55.937525 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937530 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937535 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937540 | orchestrator | 2025-03-27 01:01:55.937545 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-03-27 01:01:55.937549 | orchestrator | Thursday 27 March 2025 01:01:04 +0000 (0:00:00.588) 0:13:13.402 ******** 2025-03-27 01:01:55.937554 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.937559 | orchestrator | 2025-03-27 01:01:55.937564 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-03-27 01:01:55.937568 | orchestrator | Thursday 27 March 2025 01:01:05 +0000 (0:00:00.941) 0:13:14.343 ******** 2025-03-27 01:01:55.937573 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-03-27 01:01:55.937578 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-03-27 01:01:55.937583 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-03-27 01:01:55.937587 | orchestrator | 2025-03-27 01:01:55.937592 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-03-27 01:01:55.937597 | orchestrator | Thursday 27 March 2025 01:01:06 +0000 (0:00:00.760) 0:13:15.104 ******** 2025-03-27 01:01:55.937602 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:01:55.937606 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-27 01:01:55.937611 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-03-27 01:01:55.937616 | orchestrator | 2025-03-27 01:01:55.937621 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-03-27 01:01:55.937626 | orchestrator | Thursday 27 March 2025 01:01:08 +0000 (0:00:01.983) 0:13:17.087 ******** 2025-03-27 01:01:55.937630 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-27 01:01:55.937635 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-03-27 01:01:55.937640 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.937645 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-27 01:01:55.937650 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-03-27 01:01:55.937654 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.937659 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-27 01:01:55.937664 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-03-27 01:01:55.937669 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.937677 | orchestrator | 2025-03-27 01:01:55.937682 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-03-27 01:01:55.937687 | orchestrator | Thursday 27 March 2025 01:01:09 +0000 (0:00:01.285) 0:13:18.373 ******** 2025-03-27 01:01:55.937692 | 
orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937697 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937701 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937706 | orchestrator | 2025-03-27 01:01:55.937711 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-03-27 01:01:55.937716 | orchestrator | Thursday 27 March 2025 01:01:10 +0000 (0:00:00.625) 0:13:18.998 ******** 2025-03-27 01:01:55.937721 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937725 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937730 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937735 | orchestrator | 2025-03-27 01:01:55.937739 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-03-27 01:01:55.937744 | orchestrator | Thursday 27 March 2025 01:01:10 +0000 (0:00:00.345) 0:13:19.343 ******** 2025-03-27 01:01:55.937749 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-03-27 01:01:55.937754 | orchestrator | 2025-03-27 01:01:55.937758 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-03-27 01:01:55.937763 | orchestrator | Thursday 27 March 2025 01:01:11 +0000 (0:00:00.254) 0:13:19.598 ******** 2025-03-27 01:01:55.937768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937795 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937800 | orchestrator | 2025-03-27 01:01:55.937804 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-03-27 01:01:55.937809 | orchestrator | Thursday 27 March 2025 01:01:12 +0000 (0:00:00.960) 0:13:20.558 ******** 2025-03-27 01:01:55.937814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937843 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937848 | 
orchestrator | 2025-03-27 01:01:55.937853 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-03-27 01:01:55.937857 | orchestrator | Thursday 27 March 2025 01:01:13 +0000 (0:00:01.098) 0:13:21.656 ******** 2025-03-27 01:01:55.937862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-03-27 01:01:55.937890 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937894 | orchestrator | 2025-03-27 01:01:55.937899 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-03-27 01:01:55.937904 | orchestrator | Thursday 27 March 2025 01:01:13 +0000 (0:00:00.696) 0:13:22.353 ******** 2025-03-27 01:01:55.937909 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-03-27 01:01:55.937913 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-03-27 01:01:55.937918 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-03-27 01:01:55.937923 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-03-27 01:01:55.937928 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-03-27 01:01:55.937933 | orchestrator | 2025-03-27 01:01:55.937938 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-03-27 01:01:55.937942 | orchestrator | Thursday 27 March 2025 01:01:38 +0000 (0:00:24.956) 0:13:47.309 ******** 2025-03-27 01:01:55.937947 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937952 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937957 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937961 | orchestrator | 2025-03-27 01:01:55.937966 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-03-27 01:01:55.937971 | orchestrator | Thursday 27 March 2025 01:01:39 +0000 (0:00:00.534) 0:13:47.843 ******** 2025-03-27 01:01:55.937976 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.937980 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.937985 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.937990 | orchestrator | 2025-03-27 01:01:55.937997 | 
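Only the replicated-pool path runs, delegated once to testbed-node-0, and takes roughly 25 seconds here (see the TASKS RECAP below). A rough CLI equivalent of what gets created per pool (a sketch only, not the role's actual implementation; it assumes the ceph client can be invoked on the monitor host):

    - name: Create one replicated RGW pool (CLI equivalent, illustrative only)
      ansible.builtin.command: "{{ item }}"
      loop:
        - ceph osd pool create default.rgw.control 8 8 replicated
        - ceph osd pool set default.rgw.control size 3
        - ceph osd pool application enable default.rgw.control rgw
      delegate_to: "{{ groups[mon_group_name][0] }}"
      run_once: true
      # a real task would detect idempotence instead of always reporting changed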
orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-03-27 01:01:55.938002 | orchestrator | Thursday 27 March 2025 01:01:39 +0000 (0:00:00.363) 0:13:48.206 ******** 2025-03-27 01:01:55.938007 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.938012 | orchestrator | 2025-03-27 01:01:55.938031 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-03-27 01:01:55.938036 | orchestrator | Thursday 27 March 2025 01:01:40 +0000 (0:00:00.573) 0:13:48.780 ******** 2025-03-27 01:01:55.938040 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.938045 | orchestrator | 2025-03-27 01:01:55.938050 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-03-27 01:01:55.938055 | orchestrator | Thursday 27 March 2025 01:01:41 +0000 (0:00:00.828) 0:13:49.609 ******** 2025-03-27 01:01:55.938059 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.938064 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.938069 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.938074 | orchestrator | 2025-03-27 01:01:55.938084 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-03-27 01:01:55.938089 | orchestrator | Thursday 27 March 2025 01:01:42 +0000 (0:00:01.323) 0:13:50.932 ******** 2025-03-27 01:01:55.938093 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.938098 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.938105 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.938110 | orchestrator | 2025-03-27 01:01:55.938115 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-03-27 01:01:55.938120 | orchestrator | Thursday 27 March 2025 01:01:43 +0000 (0:00:01.282) 0:13:52.215 ******** 2025-03-27 01:01:55.938124 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.938129 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.938134 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.938139 | orchestrator | 2025-03-27 01:01:55.938143 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-03-27 01:01:55.938148 | orchestrator | Thursday 27 March 2025 01:01:45 +0000 (0:00:02.093) 0:13:54.308 ******** 2025-03-27 01:01:55.938153 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-03-27 01:01:55.938158 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-03-27 01:01:55.938163 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-03-27 01:01:55.938168 | orchestrator | 2025-03-27 01:01:55.938172 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-03-27 01:01:55.938177 | orchestrator | Thursday 27 March 2025 01:01:47 +0000 (0:00:01.999) 0:13:56.308 ******** 2025-03-27 01:01:55.938182 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.938187 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:01:55.938191 | 
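Each node runs a single radosgw instance; the systemd unit and the ceph-radosgw.target generated above wrap one container per entry of the per-host instance list. A minimal sketch of that data for testbed-node-3, with key names taken from the logged items (the enclosing variable name rgw_instances follows ceph-ansible convention and is an assumption here):

    rgw_instances:
      - instance_name: rgw0
        radosgw_address: 192.168.16.13   # this host's address on the storage network
        radosgw_frontend_port: 8081      # frontend port of this instance

Each entry becomes one systemd-managed radosgw container started under ceph-radosgw.target, as the 'systemd start rgw container' task shows.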
orchestrator | skipping: [testbed-node-5] 2025-03-27 01:01:55.938196 | orchestrator | 2025-03-27 01:01:55.938201 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-03-27 01:01:55.938206 | orchestrator | Thursday 27 March 2025 01:01:49 +0000 (0:00:01.265) 0:13:57.573 ******** 2025-03-27 01:01:55.938211 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.938215 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.938220 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.938225 | orchestrator | 2025-03-27 01:01:55.938230 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-03-27 01:01:55.938234 | orchestrator | Thursday 27 March 2025 01:01:49 +0000 (0:00:00.766) 0:13:58.339 ******** 2025-03-27 01:01:55.938239 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:01:55.938244 | orchestrator | 2025-03-27 01:01:55.938249 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-03-27 01:01:55.938254 | orchestrator | Thursday 27 March 2025 01:01:50 +0000 (0:00:00.897) 0:13:59.237 ******** 2025-03-27 01:01:55.938258 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.938263 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.938268 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.938273 | orchestrator | 2025-03-27 01:01:55.938278 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-03-27 01:01:55.938282 | orchestrator | Thursday 27 March 2025 01:01:51 +0000 (0:00:00.349) 0:13:59.587 ******** 2025-03-27 01:01:55.938287 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.938292 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.938297 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:01:55.938301 | orchestrator | 2025-03-27 01:01:55.938306 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-03-27 01:01:55.938311 | orchestrator | Thursday 27 March 2025 01:01:52 +0000 (0:00:01.345) 0:14:00.932 ******** 2025-03-27 01:01:55.938316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:01:55.938323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:01:55.938328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:01:55.938333 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:01:55.938338 | orchestrator | 2025-03-27 01:01:55.938343 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-03-27 01:01:55.938347 | orchestrator | Thursday 27 March 2025 01:01:53 +0000 (0:00:00.959) 0:14:01.891 ******** 2025-03-27 01:01:55.938352 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:01:55.938357 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:01:55.938362 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:01:55.938366 | orchestrator | 2025-03-27 01:01:55.938371 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-03-27 01:01:55.938376 | orchestrator | Thursday 27 March 2025 01:01:53 +0000 (0:00:00.397) 0:14:02.289 ******** 2025-03-27 01:01:55.938381 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:01:55.938385 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:01:55.938390 | orchestrator 
| changed: [testbed-node-5] 2025-03-27 01:01:55.938395 | orchestrator | 2025-03-27 01:01:55.938400 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:01:55.938404 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-03-27 01:01:55.938410 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-03-27 01:01:55.938415 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-03-27 01:01:55.938420 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-03-27 01:01:55.938425 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-03-27 01:01:55.938432 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-03-27 01:01:58.962088 | orchestrator | 2025-03-27 01:01:58.962208 | orchestrator | 2025-03-27 01:01:58.962226 | orchestrator | 2025-03-27 01:01:58.962242 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:01:58.962258 | orchestrator | Thursday 27 March 2025 01:01:55 +0000 (0:00:01.335) 0:14:03.624 ******** 2025-03-27 01:01:58.962272 | orchestrator | =============================================================================== 2025-03-27 01:01:58.962304 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 40.77s 2025-03-27 01:01:58.962319 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 28.92s 2025-03-27 01:01:58.962335 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 24.96s 2025-03-27 01:01:58.962349 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.57s 2025-03-27 01:01:58.962363 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.26s 2025-03-27 01:01:58.962377 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.84s 2025-03-27 01:01:58.962391 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.79s 2025-03-27 01:01:58.962405 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.36s 2025-03-27 01:01:58.962420 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 7.78s 2025-03-27 01:01:58.962433 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.64s 2025-03-27 01:01:58.962491 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.80s 2025-03-27 01:01:58.962526 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.69s 2025-03-27 01:01:58.962541 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 6.48s 2025-03-27 01:01:58.962555 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.40s 2025-03-27 01:01:58.962568 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.07s 2025-03-27 01:01:58.962583 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.42s 2025-03-27 01:01:58.962598 | orchestrator | ceph-osd : apply operating system tuning -------------------------------- 4.37s 2025-03-27 01:01:58.962614 | orchestrator | ceph-container-common : enable ceph.target ------------------------------ 3.81s 2025-03-27 01:01:58.962630 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.64s 2025-03-27 01:01:58.962645 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.62s 2025-03-27 01:01:58.962661 | orchestrator | 2025-03-27 01:01:55 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:01:58.962679 | orchestrator | 2025-03-27 01:01:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:01:58.962713 | orchestrator | 2025-03-27 01:01:58 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:01:58.962968 | orchestrator | 2025-03-27 01:01:58 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:01:58.964692 | orchestrator | 2025-03-27 01:01:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:02.034772 | orchestrator | 2025-03-27 01:01:58 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:02.034912 | orchestrator | 2025-03-27 01:02:02 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:02.036114 | orchestrator | 2025-03-27 01:02:02 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:02.040413 | orchestrator | 2025-03-27 01:02:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:05.092523 | orchestrator | 2025-03-27 01:02:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:05.092664 | orchestrator | 2025-03-27 01:02:05 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:05.093596 | orchestrator | 2025-03-27 01:02:05 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in 
state STARTED 2025-03-27 01:02:05.096224 | orchestrator | 2025-03-27 01:02:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:05.097093 | orchestrator | 2025-03-27 01:02:05 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:08.151374 | orchestrator | 2025-03-27 01:02:08 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:08.153247 | orchestrator | 2025-03-27 01:02:08 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:08.156407 | orchestrator | 2025-03-27 01:02:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:11.211539 | orchestrator | 2025-03-27 01:02:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:11.211669 | orchestrator | 2025-03-27 01:02:11 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:11.214573 | orchestrator | 2025-03-27 01:02:11 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:11.219368 | orchestrator | 2025-03-27 01:02:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:14.266864 | orchestrator | 2025-03-27 01:02:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:14.267066 | orchestrator | 2025-03-27 01:02:14 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:14.267754 | orchestrator | 2025-03-27 01:02:14 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:14.269606 | orchestrator | 2025-03-27 01:02:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:14.269690 | orchestrator | 2025-03-27 01:02:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:17.329818 | orchestrator | 2025-03-27 01:02:17 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:17.330081 | orchestrator | 2025-03-27 01:02:17 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:17.331802 | orchestrator | 2025-03-27 01:02:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:20.390826 | orchestrator | 2025-03-27 01:02:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:20.390975 | orchestrator | 2025-03-27 01:02:20 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:20.393127 | orchestrator | 2025-03-27 01:02:20 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:20.397220 | orchestrator | 2025-03-27 01:02:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:23.451296 | orchestrator | 2025-03-27 01:02:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:23.451432 | orchestrator | 2025-03-27 01:02:23 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:23.456119 | orchestrator | 2025-03-27 01:02:23 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:26.512810 | orchestrator | 2025-03-27 01:02:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:26.512930 | orchestrator | 2025-03-27 01:02:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:26.512966 | orchestrator | 2025-03-27 01:02:26 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:26.517289 | 
orchestrator | 2025-03-27 01:02:26 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:26.518256 | orchestrator | 2025-03-27 01:02:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:29.563674 | orchestrator | 2025-03-27 01:02:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:29.563803 | orchestrator | 2025-03-27 01:02:29 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:29.568748 | orchestrator | 2025-03-27 01:02:29 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:29.572220 | orchestrator | 2025-03-27 01:02:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:32.623525 | orchestrator | 2025-03-27 01:02:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:32.624323 | orchestrator | 2025-03-27 01:02:32 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:32.624619 | orchestrator | 2025-03-27 01:02:32 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:32.624655 | orchestrator | 2025-03-27 01:02:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:35.690789 | orchestrator | 2025-03-27 01:02:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:35.690929 | orchestrator | 2025-03-27 01:02:35 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state STARTED 2025-03-27 01:02:35.692728 | orchestrator | 2025-03-27 01:02:35 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:35.695666 | orchestrator | 2025-03-27 01:02:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:38.744150 | orchestrator | 2025-03-27 01:02:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:38.744282 | orchestrator | 2025-03-27 01:02:38.744303 | orchestrator | 2025-03-27 01:02:38.744318 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-03-27 01:02:38.744332 | orchestrator | 2025-03-27 01:02:38.744347 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-03-27 01:02:38.744361 | orchestrator | Thursday 27 March 2025 00:58:49 +0000 (0:00:00.170) 0:00:00.170 ******** 2025-03-27 01:02:38.744376 | orchestrator | ok: [localhost] => { 2025-03-27 01:02:38.744391 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-03-27 01:02:38.744406 | orchestrator | } 2025-03-27 01:02:38.744420 | orchestrator | 2025-03-27 01:02:38.744434 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-03-27 01:02:38.744498 | orchestrator | Thursday 27 March 2025 00:58:49 +0000 (0:00:00.056) 0:00:00.226 ******** 2025-03-27 01:02:38.744514 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-03-27 01:02:38.744530 | orchestrator | ...ignoring 2025-03-27 01:02:38.744544 | orchestrator | 2025-03-27 01:02:38.744558 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-03-27 01:02:38.744572 | orchestrator | Thursday 27 March 2025 00:58:52 +0000 (0:00:02.551) 0:00:02.778 ******** 2025-03-27 01:02:38.744585 | orchestrator | skipping: [localhost] 2025-03-27 01:02:38.744599 | orchestrator | 2025-03-27 01:02:38.744613 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-03-27 01:02:38.744628 | orchestrator | Thursday 27 March 2025 00:58:52 +0000 (0:00:00.070) 0:00:02.848 ******** 2025-03-27 01:02:38.744641 | orchestrator | ok: [localhost] 2025-03-27 01:02:38.744656 | orchestrator | 2025-03-27 01:02:38.744669 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:02:38.744683 | orchestrator | 2025-03-27 01:02:38.744697 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:02:38.744711 | orchestrator | Thursday 27 March 2025 00:58:52 +0000 (0:00:00.153) 0:00:03.002 ******** 2025-03-27 01:02:38.744728 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:02:38.744744 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:02:38.744761 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:02:38.744777 | orchestrator | 2025-03-27 01:02:38.744808 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:02:38.744823 | orchestrator | Thursday 27 March 2025 00:58:53 +0000 (0:00:00.483) 0:00:03.485 ******** 2025-03-27 01:02:38.744837 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-03-27 01:02:38.744856 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-03-27 01:02:38.744870 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-03-27 01:02:38.744884 | orchestrator | 2025-03-27 01:02:38.744899 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-03-27 01:02:38.744912 | orchestrator | 2025-03-27 01:02:38.744926 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-03-27 01:02:38.744940 | orchestrator | Thursday 27 March 2025 00:58:53 +0000 (0:00:00.483) 0:00:03.969 ******** 2025-03-27 01:02:38.744954 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:02:38.744968 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-03-27 01:02:38.744982 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-03-27 01:02:38.744996 | orchestrator | 2025-03-27 01:02:38.745009 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-03-27 01:02:38.745047 | orchestrator | Thursday 27 March 2025 00:58:54 +0000 (0:00:00.707) 0:00:04.676 ******** 2025-03-27 01:02:38.745062 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:02:38.745077 | orchestrator | 2025-03-27 01:02:38.745090 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-03-27 01:02:38.745104 | orchestrator | Thursday 27 March 2025 00:58:55 +0000 (0:00:00.714) 0:00:05.390 ******** 2025-03-27 
01:02:38.745136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-27 01:02:38.745156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', 
' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-27 01:02:38.745187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-27 01:02:38.745205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-27 01:02:38.745221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-27 01:02:38.745237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 
'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-27 01:02:38.745251 | orchestrator | 2025-03-27 01:02:38.745272 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-03-27 01:02:38.745286 | orchestrator | Thursday 27 March 2025 00:59:00 +0000 (0:00:05.002) 0:00:10.393 ******** 2025-03-27 01:02:38.745300 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.745320 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.745335 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.745349 | orchestrator | 2025-03-27 01:02:38.745362 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-03-27 01:02:38.745376 | orchestrator | Thursday 27 March 2025 00:59:01 +0000 (0:00:00.946) 0:00:11.339 ******** 2025-03-27 01:02:38.745390 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.745404 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.745418 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.745431 | orchestrator | 2025-03-27 01:02:38.745445 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-03-27 01:02:38.745478 | orchestrator | Thursday 27 March 2025 00:59:02 +0000 (0:00:01.576) 0:00:12.916 ******** 2025-03-27 01:02:38.745506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-27 01:02:38.745524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-27 01:02:38.745547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-27 01:02:38.745571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-27 01:02:38.745587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-27 01:02:38.745602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-27 01:02:38.745624 | orchestrator | 2025-03-27 01:02:38.745638 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-03-27 01:02:38.745652 | orchestrator | Thursday 27 March 2025 00:59:09 +0000 (0:00:07.075) 0:00:19.991 ******** 2025-03-27 01:02:38.745666 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.745680 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.745694 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.745708 | orchestrator | 2025-03-27 01:02:38.745721 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-03-27 01:02:38.745735 | orchestrator | Thursday 27 March 2025 00:59:10 +0000 (0:00:01.107) 0:00:21.099 ******** 2025-03-27 01:02:38.745868 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:02:38.745888 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.745902 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:02:38.745916 | orchestrator | 2025-03-27 01:02:38.745930 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-03-27 01:02:38.745950 | orchestrator | Thursday 27 March 2025 00:59:20 +0000 (0:00:09.759) 0:00:30.858 
******** 2025-03-27 01:02:38.745975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-27 01:02:38.745992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 
2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-27 01:02:38.746067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-03-27 01:02:38.746096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-27 01:02:38.746112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-27 01:02:38.746163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-03-27 01:02:38.746179 | orchestrator | 2025-03-27 01:02:38.746193 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-03-27 01:02:38.746207 | orchestrator | Thursday 27 March 2025 00:59:25 +0000 (0:00:05.154) 0:00:36.012 ******** 2025-03-27 01:02:38.746221 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.746235 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:02:38.746248 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:02:38.746262 | orchestrator | 2025-03-27 01:02:38.746276 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-03-27 01:02:38.746290 | orchestrator | Thursday 27 March 2025 00:59:26 +0000 (0:00:01.134) 0:00:37.146 ******** 2025-03-27 01:02:38.746304 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:02:38.746319 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:02:38.746332 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:02:38.746346 | orchestrator | 2025-03-27 01:02:38.746360 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-03-27 01:02:38.746374 | orchestrator | Thursday 27 March 2025 00:59:27 +0000 (0:00:00.549) 0:00:37.695 ******** 2025-03-27 01:02:38.746388 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:02:38.746401 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:02:38.746415 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:02:38.746429 | orchestrator | 2025-03-27 01:02:38.746443 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-03-27 01:02:38.746476 | orchestrator | Thursday 27 March 2025 00:59:28 +0000 (0:00:00.590) 0:00:38.286 ******** 2025-03-27 01:02:38.746491 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-03-27 01:02:38.746508 | orchestrator | ...ignoring 2025-03-27 01:02:38.746525 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-03-27 01:02:38.746541 | orchestrator | ...ignoring 2025-03-27 01:02:38.746558 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-03-27 01:02:38.746574 | orchestrator | ...ignoring 2025-03-27 01:02:38.746588 | orchestrator | 2025-03-27 01:02:38.746602 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-03-27 01:02:38.746616 | orchestrator | Thursday 27 March 2025 00:59:38 +0000 (0:00:10.923) 0:00:49.209 ******** 2025-03-27 01:02:38.746630 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:02:38.746643 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:02:38.746657 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:02:38.746671 | orchestrator | 2025-03-27 01:02:38.746690 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-03-27 01:02:38.746704 | orchestrator | Thursday 27 March 2025 00:59:39 +0000 (0:00:00.655) 0:00:49.865 ******** 2025-03-27 01:02:38.746718 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:02:38.746732 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.746759 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.746774 | orchestrator | 2025-03-27 01:02:38.746787 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-03-27 01:02:38.746801 | orchestrator | Thursday 27 March 2025 00:59:40 +0000 (0:00:00.730) 0:00:50.595 ******** 2025-03-27 01:02:38.746815 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:02:38.746829 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.746842 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.746856 | orchestrator | 2025-03-27 01:02:38.746876 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-03-27 01:02:38.746890 | orchestrator | Thursday 27 March 2025 00:59:40 +0000 (0:00:00.504) 0:00:51.100 ******** 2025-03-27 01:02:38.746904 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:02:38.746918 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.746932 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.746945 | orchestrator | 2025-03-27 01:02:38.746959 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-03-27 01:02:38.746973 | orchestrator | Thursday 27 March 2025 00:59:41 +0000 (0:00:00.634) 0:00:51.734 ******** 2025-03-27 01:02:38.746986 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:02:38.747000 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:02:38.747014 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:02:38.747028 | orchestrator | 2025-03-27 01:02:38.747042 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-03-27 01:02:38.747055 | orchestrator | Thursday 27 March 2025 00:59:42 +0000 (0:00:00.621) 0:00:52.355 ******** 2025-03-27 01:02:38.747069 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:02:38.747083 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.747097 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.747110 | orchestrator | 2025-03-27 01:02:38.747124 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-03-27 01:02:38.747137 | orchestrator | Thursday 27 March 2025 00:59:42 +0000 (0:00:00.545) 0:00:52.901 ******** 2025-03-27 01:02:38.747151 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.747165 | orchestrator | skipping: 
[testbed-node-2] 2025-03-27 01:02:38.747178 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-03-27 01:02:38.747192 | orchestrator | 2025-03-27 01:02:38.747205 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-03-27 01:02:38.747219 | orchestrator | Thursday 27 March 2025 00:59:43 +0000 (0:00:00.564) 0:00:53.466 ******** 2025-03-27 01:02:38.747233 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.747247 | orchestrator | 2025-03-27 01:02:38.747260 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-03-27 01:02:38.747274 | orchestrator | Thursday 27 March 2025 00:59:54 +0000 (0:00:11.018) 0:01:04.485 ******** 2025-03-27 01:02:38.747288 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:02:38.747301 | orchestrator | 2025-03-27 01:02:38.747315 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-03-27 01:02:38.747329 | orchestrator | Thursday 27 March 2025 00:59:54 +0000 (0:00:00.140) 0:01:04.626 ******** 2025-03-27 01:02:38.747342 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:02:38.747356 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.747369 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.747383 | orchestrator | 2025-03-27 01:02:38.747508 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-03-27 01:02:38.747531 | orchestrator | Thursday 27 March 2025 00:59:55 +0000 (0:00:01.286) 0:01:05.912 ******** 2025-03-27 01:02:38.747546 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.747561 | orchestrator | 2025-03-27 01:02:38.747575 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-03-27 01:02:38.747590 | orchestrator | Thursday 27 March 2025 01:00:05 +0000 (0:00:10.148) 0:01:16.060 ******** 2025-03-27 01:02:38.747604 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
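For reference, the "port liveness" probes above wait for the string "MariaDB" to appear in the greeting the server sends on port 3306, which is why the pre-bootstrap check fails with "Timeout when waiting for search string MariaDB in 192.168.16.x:3306" on a fresh deployment and is ignored. A minimal sketch of such a probe, assuming plain TCP access to the nodes (the helper name and retry values are illustrative, not taken from the playbook):

import socket
import time

def wait_for_mariadb_banner(host: str, port: int = 3306, timeout: float = 10.0) -> bool:
    """Return True if the TCP greeting on host:port contains 'MariaDB'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2) as sock:
                # The MySQL/MariaDB handshake packet embeds the server version
                # string (e.g. "...10.11.10-MariaDB..."), so the banner check
                # doubles as a "is this really MariaDB?" test.
                if b"MariaDB" in sock.recv(128):
                    return True
        except OSError:
            pass  # port not open yet; retry until the deadline
        time.sleep(1)
    return False

# Example: wait_for_mariadb_banner("192.168.16.10") times out on a fresh
# deployment, matching the ignored failures above, and succeeds once the
# bootstrap container is listening.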
2025-03-27 01:02:38.747628 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:02:38.747643 | orchestrator | 2025-03-27 01:02:38.747658 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-03-27 01:02:38.747672 | orchestrator | Thursday 27 March 2025 01:00:13 +0000 (0:00:07.282) 0:01:23.343 ******** 2025-03-27 01:02:38.747686 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:02:38.747701 | orchestrator | 2025-03-27 01:02:38.747715 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-03-27 01:02:38.747730 | orchestrator | Thursday 27 March 2025 01:00:15 +0000 (0:00:02.770) 0:01:26.113 ******** 2025-03-27 01:02:38.747744 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.747758 | orchestrator | 2025-03-27 01:02:38.747773 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-03-27 01:02:38.747787 | orchestrator | Thursday 27 March 2025 01:00:16 +0000 (0:00:00.130) 0:01:26.244 ******** 2025-03-27 01:02:38.747802 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:02:38.747816 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.747837 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.747852 | orchestrator | 2025-03-27 01:02:38.747867 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-03-27 01:02:38.747881 | orchestrator | Thursday 27 March 2025 01:00:16 +0000 (0:00:00.477) 0:01:26.721 ******** 2025-03-27 01:02:38.747895 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:02:38.747910 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:02:38.747924 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:02:38.747938 | orchestrator | 2025-03-27 01:02:38.747953 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-03-27 01:02:38.747971 | orchestrator | Thursday 27 March 2025 01:00:16 +0000 (0:00:00.513) 0:01:27.234 ******** 2025-03-27 01:02:38.747986 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-03-27 01:02:38.748001 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.748015 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:02:38.748030 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:02:38.748044 | orchestrator | 2025-03-27 01:02:38.748059 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-03-27 01:02:38.748073 | orchestrator | skipping: no hosts matched 2025-03-27 01:02:38.748088 | orchestrator | 2025-03-27 01:02:38.748102 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-03-27 01:02:38.748116 | orchestrator | 2025-03-27 01:02:38.748131 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-03-27 01:02:38.748146 | orchestrator | Thursday 27 March 2025 01:00:37 +0000 (0:00:20.128) 0:01:47.363 ******** 2025-03-27 01:02:38.748163 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:02:38.748179 | orchestrator | 2025-03-27 01:02:38.748204 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-03-27 01:02:38.748221 | orchestrator | Thursday 27 March 2025 01:00:53 +0000 (0:00:16.649) 0:02:04.012 ******** 2025-03-27 01:02:38.748237 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:02:38.748255 | orchestrator | 
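The "Wait for ... MariaDB service to sync WSREP" handlers above amount to checking Galera's sync state; the usual way to do that is to poll the wsrep_local_state_comment status variable until it reports "Synced". A minimal sketch of such a check, assuming the mysql client CLI and the monitor credentials from the clustercheck environment shown earlier (calling the CLI from Python is an illustrative choice, not a transcription of the kolla-ansible role):

import subprocess
import time

def wait_for_wsrep_synced(host: str, user: str, password: str, timeout: float = 120.0) -> bool:
    """Poll until wsrep_local_state_comment reports 'Synced' or the timeout expires."""
    query = "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = subprocess.run(
            ["mysql", "-h", host, "-u", user, f"-p{password}", "-N", "-s", "-e", query],
            capture_output=True, text=True,
        )
        # Expected output once the node has joined the cluster:
        # "wsrep_local_state_comment\tSynced"
        if result.returncode == 0 and "Synced" in result.stdout:
            return True
        time.sleep(2)
    return False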
2025-03-27 01:02:38.748271 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-03-27 01:02:38.748286 | orchestrator | Thursday 27 March 2025 01:01:14 +0000 (0:00:20.594) 0:02:24.607 ******** 2025-03-27 01:02:38.748303 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:02:38.748319 | orchestrator | 2025-03-27 01:02:38.748335 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-03-27 01:02:38.748351 | orchestrator | 2025-03-27 01:02:38.748367 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-03-27 01:02:38.748383 | orchestrator | Thursday 27 March 2025 01:01:17 +0000 (0:00:02.921) 0:02:27.528 ******** 2025-03-27 01:02:38.748399 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:02:38.748416 | orchestrator | 2025-03-27 01:02:38.748432 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-03-27 01:02:38.748485 | orchestrator | Thursday 27 March 2025 01:01:33 +0000 (0:00:16.674) 0:02:44.203 ******** 2025-03-27 01:02:38.748502 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:02:38.748518 | orchestrator | 2025-03-27 01:02:38.748532 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-03-27 01:02:38.748546 | orchestrator | Thursday 27 March 2025 01:01:54 +0000 (0:00:20.588) 0:03:04.792 ******** 2025-03-27 01:02:38.748559 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:02:38.748573 | orchestrator | 2025-03-27 01:02:38.748587 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-03-27 01:02:38.748601 | orchestrator | 2025-03-27 01:02:38.748615 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-03-27 01:02:38.748629 | orchestrator | Thursday 27 March 2025 01:01:57 +0000 (0:00:02.816) 0:03:07.608 ******** 2025-03-27 01:02:38.748642 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.748656 | orchestrator | 2025-03-27 01:02:38.748670 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-03-27 01:02:38.748684 | orchestrator | Thursday 27 March 2025 01:02:17 +0000 (0:00:19.678) 0:03:27.286 ******** 2025-03-27 01:02:38.748697 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:02:38.748711 | orchestrator | 2025-03-27 01:02:38.748725 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-03-27 01:02:38.748739 | orchestrator | Thursday 27 March 2025 01:02:17 +0000 (0:00:00.558) 0:03:27.845 ******** 2025-03-27 01:02:38.748753 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:02:38.748767 | orchestrator | 2025-03-27 01:02:38.748780 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-03-27 01:02:38.748794 | orchestrator | 2025-03-27 01:02:38.748808 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-03-27 01:02:38.748821 | orchestrator | Thursday 27 March 2025 01:02:20 +0000 (0:00:02.949) 0:03:30.794 ******** 2025-03-27 01:02:38.748835 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:02:38.748849 | orchestrator | 2025-03-27 01:02:38.748863 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-03-27 01:02:38.748876 | orchestrator | Thursday 27 
March 2025 01:02:21 +0000 (0:00:00.807) 0:03:31.602 ******** 2025-03-27 01:02:38.748890 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.748904 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.748918 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.748931 | orchestrator | 2025-03-27 01:02:38.748945 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-03-27 01:02:38.748959 | orchestrator | Thursday 27 March 2025 01:02:24 +0000 (0:00:02.758) 0:03:34.360 ******** 2025-03-27 01:02:38.748972 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.749067 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.749090 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.749104 | orchestrator | 2025-03-27 01:02:38.749124 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-03-27 01:02:38.749138 | orchestrator | Thursday 27 March 2025 01:02:26 +0000 (0:00:02.499) 0:03:36.860 ******** 2025-03-27 01:02:38.749152 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.749165 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.749179 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.749193 | orchestrator | 2025-03-27 01:02:38.749206 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-03-27 01:02:38.749220 | orchestrator | Thursday 27 March 2025 01:02:29 +0000 (0:00:02.808) 0:03:39.668 ******** 2025-03-27 01:02:38.749234 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.749247 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.749261 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:02:38.749275 | orchestrator | 2025-03-27 01:02:38.749289 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-03-27 01:02:38.749302 | orchestrator | Thursday 27 March 2025 01:02:31 +0000 (0:00:02.402) 0:03:42.070 ******** 2025-03-27 01:02:38.749324 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:02:38.749338 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:02:38.749352 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:02:38.749365 | orchestrator | 2025-03-27 01:02:38.749379 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-03-27 01:02:38.749393 | orchestrator | Thursday 27 March 2025 01:02:36 +0000 (0:00:04.209) 0:03:46.280 ******** 2025-03-27 01:02:38.749407 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:02:38.749421 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:02:38.749435 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:02:38.749467 | orchestrator | 2025-03-27 01:02:38.749482 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:02:38.749496 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-03-27 01:02:38.749511 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-03-27 01:02:38.749534 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-03-27 01:02:38.749684 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-03-27 01:02:38.749706 | orchestrator | 2025-03-27 01:02:38.749720 | orchestrator | 2025-03-27 
01:02:38.749734 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:02:38.749747 | orchestrator | Thursday 27 March 2025 01:02:36 +0000 (0:00:00.466) 0:03:46.747 ******** 2025-03-27 01:02:38.749761 | orchestrator | =============================================================================== 2025-03-27 01:02:38.749775 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.18s 2025-03-27 01:02:38.749788 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.32s 2025-03-27 01:02:38.749802 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 20.13s 2025-03-27 01:02:38.749816 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 19.68s 2025-03-27 01:02:38.749830 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.02s 2025-03-27 01:02:38.749843 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.92s 2025-03-27 01:02:38.749857 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 10.15s 2025-03-27 01:02:38.749871 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 9.76s 2025-03-27 01:02:38.749884 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.28s 2025-03-27 01:02:38.749898 | orchestrator | mariadb : Copying over config.json files for services ------------------- 7.08s 2025-03-27 01:02:38.749911 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.74s 2025-03-27 01:02:38.749925 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 5.15s 2025-03-27 01:02:38.749939 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 5.00s 2025-03-27 01:02:38.749952 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 4.21s 2025-03-27 01:02:38.749966 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.95s 2025-03-27 01:02:38.749979 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.81s 2025-03-27 01:02:38.749993 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.77s 2025-03-27 01:02:38.750007 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.76s 2025-03-27 01:02:38.750061 | orchestrator | Check MariaDB service --------------------------------------------------- 2.55s 2025-03-27 01:02:38.750088 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.50s 2025-03-27 01:02:38.750102 | orchestrator | 2025-03-27 01:02:38 | INFO  | Task ea7ee138-f48b-45a8-845e-6c18f53dc8a6 is in state SUCCESS 2025-03-27 01:02:38.750122 | orchestrator | 2025-03-27 01:02:38 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:02:38.750137 | orchestrator | 2025-03-27 01:02:38 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:38.750151 | orchestrator | 2025-03-27 01:02:38 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:02:38.750165 | orchestrator | 2025-03-27 01:02:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:38.750185 | 
orchestrator | 2025-03-27 01:02:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:41.815795 | orchestrator | 2025-03-27 01:02:41 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:02:41.816604 | orchestrator | 2025-03-27 01:02:41 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:41.817761 | orchestrator | 2025-03-27 01:02:41 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:02:41.819160 | orchestrator | 2025-03-27 01:02:41 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:44.863269 | orchestrator | 2025-03-27 01:02:41 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:44.863405 | orchestrator | 2025-03-27 01:02:44 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:02:44.865338 | orchestrator | 2025-03-27 01:02:44 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:44.866803 | orchestrator | 2025-03-27 01:02:44 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:02:44.869121 | orchestrator | 2025-03-27 01:02:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:47.932110 | orchestrator | 2025-03-27 01:02:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:47.932221 | orchestrator | 2025-03-27 01:02:47 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:02:47.932839 | orchestrator | 2025-03-27 01:02:47 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:47.935213 | orchestrator | 2025-03-27 01:02:47 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:02:47.937845 | orchestrator | 2025-03-27 01:02:47 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:47.940049 | orchestrator | 2025-03-27 01:02:47 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:50.995171 | orchestrator | 2025-03-27 01:02:50 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:02:50.997448 | orchestrator | 2025-03-27 01:02:50 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:50.998574 | orchestrator | 2025-03-27 01:02:50 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:02:51.003896 | orchestrator | 2025-03-27 01:02:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:54.053504 | orchestrator | 2025-03-27 01:02:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:54.053638 | orchestrator | 2025-03-27 01:02:54 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:02:54.056116 | orchestrator | 2025-03-27 01:02:54 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:54.057244 | orchestrator | 2025-03-27 01:02:54 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:02:54.058672 | orchestrator | 2025-03-27 01:02:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:02:57.096197 | orchestrator | 2025-03-27 01:02:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:02:57.096307 | orchestrator | 2025-03-27 01:02:57 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:02:57.096634 | orchestrator | 2025-03-27 
01:02:57 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:02:57.097557 | orchestrator | 2025-03-27 01:02:57 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:02:57.098802 | orchestrator | 2025-03-27 01:02:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:00.147315 | orchestrator | 2025-03-27 01:02:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:00.147527 | orchestrator | 2025-03-27 01:03:00 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:00.150144 | orchestrator | 2025-03-27 01:03:00 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:00.150714 | orchestrator | 2025-03-27 01:03:00 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:00.160148 | orchestrator | 2025-03-27 01:03:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:03.198592 | orchestrator | 2025-03-27 01:03:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:03.198722 | orchestrator | 2025-03-27 01:03:03 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:03.201157 | orchestrator | 2025-03-27 01:03:03 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:03.202164 | orchestrator | 2025-03-27 01:03:03 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:03.202588 | orchestrator | 2025-03-27 01:03:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:06.256837 | orchestrator | 2025-03-27 01:03:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:06.257082 | orchestrator | 2025-03-27 01:03:06 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:06.259062 | orchestrator | 2025-03-27 01:03:06 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:06.259097 | orchestrator | 2025-03-27 01:03:06 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:06.259975 | orchestrator | 2025-03-27 01:03:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:09.303139 | orchestrator | 2025-03-27 01:03:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:09.303269 | orchestrator | 2025-03-27 01:03:09 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:09.305410 | orchestrator | 2025-03-27 01:03:09 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:09.307600 | orchestrator | 2025-03-27 01:03:09 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:09.313990 | orchestrator | 2025-03-27 01:03:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:12.364754 | orchestrator | 2025-03-27 01:03:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:12.364898 | orchestrator | 2025-03-27 01:03:12 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:12.366253 | orchestrator | 2025-03-27 01:03:12 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:12.366292 | orchestrator | 2025-03-27 01:03:12 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:12.371848 | orchestrator | 2025-03-27 
01:03:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:15.404748 | orchestrator | 2025-03-27 01:03:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:15.404886 | orchestrator | 2025-03-27 01:03:15 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:15.405175 | orchestrator | 2025-03-27 01:03:15 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:15.405698 | orchestrator | 2025-03-27 01:03:15 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:15.406542 | orchestrator | 2025-03-27 01:03:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:18.443721 | orchestrator | 2025-03-27 01:03:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:18.443967 | orchestrator | 2025-03-27 01:03:18 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:18.444952 | orchestrator | 2025-03-27 01:03:18 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:18.444990 | orchestrator | 2025-03-27 01:03:18 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:18.447439 | orchestrator | 2025-03-27 01:03:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:21.503788 | orchestrator | 2025-03-27 01:03:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:21.503875 | orchestrator | 2025-03-27 01:03:21 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:21.508017 | orchestrator | 2025-03-27 01:03:21 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:21.510167 | orchestrator | 2025-03-27 01:03:21 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:21.510187 | orchestrator | 2025-03-27 01:03:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:24.567684 | orchestrator | 2025-03-27 01:03:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:24.567811 | orchestrator | 2025-03-27 01:03:24 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:24.569616 | orchestrator | 2025-03-27 01:03:24 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:24.570766 | orchestrator | 2025-03-27 01:03:24 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:24.573150 | orchestrator | 2025-03-27 01:03:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:27.624740 | orchestrator | 2025-03-27 01:03:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:27.624869 | orchestrator | 2025-03-27 01:03:27 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:27.626711 | orchestrator | 2025-03-27 01:03:27 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:27.626750 | orchestrator | 2025-03-27 01:03:27 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:30.685488 | orchestrator | 2025-03-27 01:03:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:30.685587 | orchestrator | 2025-03-27 01:03:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:30.685607 | orchestrator | 2025-03-27 01:03:30 | INFO  | Task 
bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:30.689227 | orchestrator | 2025-03-27 01:03:30 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:30.692716 | orchestrator | 2025-03-27 01:03:30 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:30.694211 | orchestrator | 2025-03-27 01:03:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:33.742639 | orchestrator | 2025-03-27 01:03:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:33.742765 | orchestrator | 2025-03-27 01:03:33 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:33.744677 | orchestrator | 2025-03-27 01:03:33 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:33.745897 | orchestrator | 2025-03-27 01:03:33 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:33.747893 | orchestrator | 2025-03-27 01:03:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:36.798567 | orchestrator | 2025-03-27 01:03:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:36.798704 | orchestrator | 2025-03-27 01:03:36 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:36.800180 | orchestrator | 2025-03-27 01:03:36 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:36.802885 | orchestrator | 2025-03-27 01:03:36 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:36.805362 | orchestrator | 2025-03-27 01:03:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:39.852258 | orchestrator | 2025-03-27 01:03:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:39.852388 | orchestrator | 2025-03-27 01:03:39 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:39.854875 | orchestrator | 2025-03-27 01:03:39 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:39.856987 | orchestrator | 2025-03-27 01:03:39 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:39.858389 | orchestrator | 2025-03-27 01:03:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:42.904784 | orchestrator | 2025-03-27 01:03:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:42.904927 | orchestrator | 2025-03-27 01:03:42 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:42.911361 | orchestrator | 2025-03-27 01:03:42 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:45.978600 | orchestrator | 2025-03-27 01:03:42 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:45.978735 | orchestrator | 2025-03-27 01:03:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:45.978761 | orchestrator | 2025-03-27 01:03:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:45.978804 | orchestrator | 2025-03-27 01:03:45 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:45.980403 | orchestrator | 2025-03-27 01:03:45 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:45.982384 | orchestrator | 2025-03-27 01:03:45 | INFO  | Task 
363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:45.985714 | orchestrator | 2025-03-27 01:03:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:49.045617 | orchestrator | 2025-03-27 01:03:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:49.045766 | orchestrator | 2025-03-27 01:03:49 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:49.046969 | orchestrator | 2025-03-27 01:03:49 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:49.048193 | orchestrator | 2025-03-27 01:03:49 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:49.051178 | orchestrator | 2025-03-27 01:03:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:52.109158 | orchestrator | 2025-03-27 01:03:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:52.109292 | orchestrator | 2025-03-27 01:03:52 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:52.110752 | orchestrator | 2025-03-27 01:03:52 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:52.112190 | orchestrator | 2025-03-27 01:03:52 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:52.114102 | orchestrator | 2025-03-27 01:03:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:55.164630 | orchestrator | 2025-03-27 01:03:52 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:55.164764 | orchestrator | 2025-03-27 01:03:55 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:55.166528 | orchestrator | 2025-03-27 01:03:55 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:55.166582 | orchestrator | 2025-03-27 01:03:55 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:55.167996 | orchestrator | 2025-03-27 01:03:55 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:03:58.218525 | orchestrator | 2025-03-27 01:03:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:03:58.218663 | orchestrator | 2025-03-27 01:03:58 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:03:58.220793 | orchestrator | 2025-03-27 01:03:58 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:03:58.223282 | orchestrator | 2025-03-27 01:03:58 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:03:58.225066 | orchestrator | 2025-03-27 01:03:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:01.276422 | orchestrator | 2025-03-27 01:03:58 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:01.276616 | orchestrator | 2025-03-27 01:04:01 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:04:01.278661 | orchestrator | 2025-03-27 01:04:01 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:04:01.279714 | orchestrator | 2025-03-27 01:04:01 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:01.282921 | orchestrator | 2025-03-27 01:04:01 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:04.339615 | orchestrator | 2025-03-27 01:04:01 | INFO  | Wait 1 
second(s) until the next check 2025-03-27 01:04:04.339744 | orchestrator | 2025-03-27 01:04:04 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:04:04.344180 | orchestrator | 2025-03-27 01:04:04 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:04:04.345860 | orchestrator | 2025-03-27 01:04:04 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:04.347501 | orchestrator | 2025-03-27 01:04:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:07.421442 | orchestrator | 2025-03-27 01:04:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:07.421621 | orchestrator | 2025-03-27 01:04:07 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:04:07.424415 | orchestrator | 2025-03-27 01:04:07 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:04:07.426256 | orchestrator | 2025-03-27 01:04:07 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:07.428321 | orchestrator | 2025-03-27 01:04:07 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:10.486776 | orchestrator | 2025-03-27 01:04:07 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:10.486900 | orchestrator | 2025-03-27 01:04:10 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:04:10.491433 | orchestrator | 2025-03-27 01:04:10 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state STARTED 2025-03-27 01:04:13.540589 | orchestrator | 2025-03-27 01:04:10 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:13.540701 | orchestrator | 2025-03-27 01:04:10 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:13.540718 | orchestrator | 2025-03-27 01:04:10 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:13.540748 | orchestrator | 2025-03-27 01:04:13 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED 2025-03-27 01:04:13.542148 | orchestrator | 2025-03-27 01:04:13 | INFO  | Task b1d09e47-33e2-427f-907d-afb7a8249536 is in state SUCCESS 2025-03-27 01:04:13.543968 | orchestrator | 2025-03-27 01:04:13.544005 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-27 01:04:13.544089 | orchestrator | 2025-03-27 01:04:13.544109 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-03-27 01:04:13.544124 | orchestrator | 2025-03-27 01:04:13.544138 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-03-27 01:04:13.544152 | orchestrator | Thursday 27 March 2025 01:01:59 +0000 (0:00:01.106) 0:00:01.106 ******** 2025-03-27 01:04:13.544502 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:04:13.544529 | orchestrator | 2025-03-27 01:04:13.544544 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-03-27 01:04:13.544558 | orchestrator | Thursday 27 March 2025 01:02:00 +0000 (0:00:00.520) 0:00:01.627 ******** 2025-03-27 01:04:13.544573 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-03-27 01:04:13.544587 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-03-27 
01:04:13.544602 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-03-27 01:04:13.544615 | orchestrator | 2025-03-27 01:04:13.544629 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-03-27 01:04:13.544643 | orchestrator | Thursday 27 March 2025 01:02:01 +0000 (0:00:01.005) 0:00:02.633 ******** 2025-03-27 01:04:13.544657 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:04:13.544671 | orchestrator | 2025-03-27 01:04:13.544685 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-03-27 01:04:13.544722 | orchestrator | Thursday 27 March 2025 01:02:02 +0000 (0:00:00.792) 0:00:03.425 ******** 2025-03-27 01:04:13.544736 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.544751 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.544765 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.544779 | orchestrator | 2025-03-27 01:04:13.544793 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-03-27 01:04:13.544807 | orchestrator | Thursday 27 March 2025 01:02:03 +0000 (0:00:00.761) 0:00:04.187 ******** 2025-03-27 01:04:13.544820 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.544835 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.544848 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.544862 | orchestrator | 2025-03-27 01:04:13.544876 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-03-27 01:04:13.544890 | orchestrator | Thursday 27 March 2025 01:02:03 +0000 (0:00:00.326) 0:00:04.513 ******** 2025-03-27 01:04:13.544903 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.544917 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.544930 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.544944 | orchestrator | 2025-03-27 01:04:13.544957 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-03-27 01:04:13.544971 | orchestrator | Thursday 27 March 2025 01:02:04 +0000 (0:00:01.002) 0:00:05.516 ******** 2025-03-27 01:04:13.544985 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.544999 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.545012 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.545026 | orchestrator | 2025-03-27 01:04:13.545040 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-03-27 01:04:13.545054 | orchestrator | Thursday 27 March 2025 01:02:04 +0000 (0:00:00.367) 0:00:05.884 ******** 2025-03-27 01:04:13.545067 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.545081 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.545097 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.545113 | orchestrator | 2025-03-27 01:04:13.545129 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-03-27 01:04:13.545145 | orchestrator | Thursday 27 March 2025 01:02:05 +0000 (0:00:00.303) 0:00:06.187 ******** 2025-03-27 01:04:13.545161 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.545190 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.545204 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.545217 | orchestrator | 2025-03-27 01:04:13.545231 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if 
not previously set] *** 2025-03-27 01:04:13.545251 | orchestrator | Thursday 27 March 2025 01:02:05 +0000 (0:00:00.388) 0:00:06.576 ******** 2025-03-27 01:04:13.545265 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.545280 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.545293 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.545307 | orchestrator | 2025-03-27 01:04:13.545321 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-03-27 01:04:13.545335 | orchestrator | Thursday 27 March 2025 01:02:06 +0000 (0:00:00.563) 0:00:07.140 ******** 2025-03-27 01:04:13.545349 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.545363 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.545377 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.545390 | orchestrator | 2025-03-27 01:04:13.545404 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-03-27 01:04:13.545418 | orchestrator | Thursday 27 March 2025 01:02:06 +0000 (0:00:00.385) 0:00:07.525 ******** 2025-03-27 01:04:13.545432 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-03-27 01:04:13.545446 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:04:13.545460 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:04:13.545509 | orchestrator | 2025-03-27 01:04:13.545524 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-03-27 01:04:13.545546 | orchestrator | Thursday 27 March 2025 01:02:07 +0000 (0:00:00.801) 0:00:08.326 ******** 2025-03-27 01:04:13.545560 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.545573 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.545669 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.545685 | orchestrator | 2025-03-27 01:04:13.545705 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-03-27 01:04:13.545720 | orchestrator | Thursday 27 March 2025 01:02:07 +0000 (0:00:00.500) 0:00:08.827 ******** 2025-03-27 01:04:13.545743 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-03-27 01:04:13.545758 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:04:13.545772 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:04:13.545786 | orchestrator | 2025-03-27 01:04:13.545799 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-03-27 01:04:13.545813 | orchestrator | Thursday 27 March 2025 01:02:10 +0000 (0:00:02.463) 0:00:11.291 ******** 2025-03-27 01:04:13.545827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-27 01:04:13.545840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-27 01:04:13.545855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-27 01:04:13.545869 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.545883 | orchestrator | 2025-03-27 01:04:13.545897 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-03-27 01:04:13.545910 | orchestrator | Thursday 27 March 2025 01:02:10 +0000 (0:00:00.482) 
0:00:11.773 ******** 2025-03-27 01:04:13.545925 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-03-27 01:04:13.545942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-03-27 01:04:13.545957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-03-27 01:04:13.545971 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.545985 | orchestrator | 2025-03-27 01:04:13.545999 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-03-27 01:04:13.546013 | orchestrator | Thursday 27 March 2025 01:02:11 +0000 (0:00:00.707) 0:00:12.481 ******** 2025-03-27 01:04:13.546079 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-27 01:04:13.546095 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-27 01:04:13.546110 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-27 01:04:13.546134 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.546148 | orchestrator | 2025-03-27 01:04:13.546162 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-03-27 01:04:13.546176 | orchestrator | Thursday 27 March 2025 01:02:11 +0000 (0:00:00.175) 0:00:12.656 ******** 2025-03-27 01:04:13.546193 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'a90b4449bff6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-03-27 01:02:08.684633', 'end': '2025-03-27 01:02:08.732871', 'delta': '0:00:00.048238', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['a90b4449bff6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-03-27 01:04:13.546224 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'e5e85aecd111', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-03-27 01:02:09.307652', 'end': '2025-03-27 01:02:09.347732', 'delta': '0:00:00.040080', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e5e85aecd111'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-03-27 01:04:13.546241 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'a80c9b827c1e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-03-27 01:02:09.883630', 'end': '2025-03-27 01:02:09.914671', 'delta': '0:00:00.031041', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a80c9b827c1e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-03-27 01:04:13.546256 | orchestrator | 2025-03-27 01:04:13.546270 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-03-27 01:04:13.546284 | orchestrator | Thursday 27 March 2025 01:02:11 +0000 (0:00:00.207) 0:00:12.863 ******** 2025-03-27 01:04:13.546299 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.546314 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.546330 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.546346 | orchestrator | 2025-03-27 01:04:13.546361 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-03-27 01:04:13.546376 | orchestrator | Thursday 27 March 2025 01:02:12 +0000 (0:00:00.495) 0:00:13.359 ******** 2025-03-27 01:04:13.546392 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-03-27 01:04:13.546407 | orchestrator | 2025-03-27 01:04:13.546422 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-03-27 01:04:13.546438 | orchestrator | Thursday 27 March 2025 01:02:13 +0000 (0:00:01.515) 0:00:14.875 ******** 2025-03-27 01:04:13.546453 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.546491 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.546508 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.546523 | orchestrator | 2025-03-27 01:04:13.546539 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-03-27 01:04:13.546562 | orchestrator | Thursday 27 March 2025 01:02:14 +0000 (0:00:00.536) 0:00:15.411 ******** 2025-03-27 01:04:13.546578 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.546594 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.546609 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.546625 | 
orchestrator | 2025-03-27 01:04:13.546640 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-27 01:04:13.546655 | orchestrator | Thursday 27 March 2025 01:02:14 +0000 (0:00:00.495) 0:00:15.907 ******** 2025-03-27 01:04:13.546668 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.546682 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.546696 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.546709 | orchestrator | 2025-03-27 01:04:13.546723 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-03-27 01:04:13.546737 | orchestrator | Thursday 27 March 2025 01:02:15 +0000 (0:00:00.359) 0:00:16.267 ******** 2025-03-27 01:04:13.546750 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.546764 | orchestrator | 2025-03-27 01:04:13.546778 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-03-27 01:04:13.546792 | orchestrator | Thursday 27 March 2025 01:02:15 +0000 (0:00:00.137) 0:00:16.404 ******** 2025-03-27 01:04:13.546805 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.546819 | orchestrator | 2025-03-27 01:04:13.546833 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-27 01:04:13.546846 | orchestrator | Thursday 27 March 2025 01:02:15 +0000 (0:00:00.294) 0:00:16.699 ******** 2025-03-27 01:04:13.546860 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.546874 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.546887 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.546901 | orchestrator | 2025-03-27 01:04:13.546915 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-03-27 01:04:13.546928 | orchestrator | Thursday 27 March 2025 01:02:16 +0000 (0:00:00.532) 0:00:17.231 ******** 2025-03-27 01:04:13.546942 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.546956 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.546970 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.546983 | orchestrator | 2025-03-27 01:04:13.546997 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-03-27 01:04:13.547016 | orchestrator | Thursday 27 March 2025 01:02:16 +0000 (0:00:00.360) 0:00:17.592 ******** 2025-03-27 01:04:13.547030 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.547044 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.547057 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.547071 | orchestrator | 2025-03-27 01:04:13.547084 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-03-27 01:04:13.547098 | orchestrator | Thursday 27 March 2025 01:02:16 +0000 (0:00:00.348) 0:00:17.941 ******** 2025-03-27 01:04:13.547112 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.547126 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.547145 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.547160 | orchestrator | 2025-03-27 01:04:13.547174 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-03-27 01:04:13.547188 | orchestrator | Thursday 27 March 2025 01:02:17 +0000 (0:00:00.387) 0:00:18.328 ******** 2025-03-27 01:04:13.547202 | orchestrator | skipping: [testbed-node-3] 2025-03-27 
01:04:13.547215 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.547229 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.547242 | orchestrator | 2025-03-27 01:04:13.547256 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-03-27 01:04:13.547270 | orchestrator | Thursday 27 March 2025 01:02:17 +0000 (0:00:00.589) 0:00:18.918 ******** 2025-03-27 01:04:13.547283 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.547297 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.547317 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.547331 | orchestrator | 2025-03-27 01:04:13.547345 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-03-27 01:04:13.547359 | orchestrator | Thursday 27 March 2025 01:02:18 +0000 (0:00:00.354) 0:00:19.273 ******** 2025-03-27 01:04:13.547373 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.547386 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.547400 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.547413 | orchestrator | 2025-03-27 01:04:13.547427 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-03-27 01:04:13.547440 | orchestrator | Thursday 27 March 2025 01:02:18 +0000 (0:00:00.326) 0:00:19.600 ******** 2025-03-27 01:04:13.547455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5e2bf155--ac50--562d--a3fc--a4d9038fe730-osd--block--5e2bf155--ac50--562d--a3fc--a4d9038fe730', 'dm-uuid-LVM-QA8Lq98hT0WrqvFZAwwmxLAQKDG9xLcdqmUJYcccF1Xf9DZu8JzS7iQDk0QMlG2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d321ea45--1a00--5698--8092--45c793cb3b8c-osd--block--d321ea45--1a00--5698--8092--45c793cb3b8c', 'dm-uuid-LVM-sgMXA1eJjWzofV27oOT5zNkGmgY2I1NJ2S6grLdwwQsM2iwC23SpFJc8NuP5WZfj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21', 'scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part15', 'scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac5892bc-50dc-4a75-a426-a457b05ebd21-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.547696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bac76156--9f65--5e37--8447--16c40269f5cf-osd--block--bac76156--9f65--5e37--8447--16c40269f5cf', 'dm-uuid-LVM-cLquHM6cTtxcfmF0FIJtGaa5SY2WsbrQzYjdnqtOmEzrnBmaHUUcOAuMpvl1kC4q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5e2bf155--ac50--562d--a3fc--a4d9038fe730-osd--block--5e2bf155--ac50--562d--a3fc--a4d9038fe730'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lf6Yge-HAyn-0DtL-eRlI-G2Y8-DOpx-0CFKlG', 'scsi-0QEMU_QEMU_HARDDISK_3a3b00e3-da7a-4c3b-8b0c-ab011795b6c9', 'scsi-SQEMU_QEMU_HARDDISK_3a3b00e3-da7a-4c3b-8b0c-ab011795b6c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.547734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cb3edc0f--ef8f--5bb1--94d3--58e33ab1473b-osd--block--cb3edc0f--ef8f--5bb1--94d3--58e33ab1473b', 'dm-uuid-LVM-acuslDl7ym18pYJdSP1LtxEkeZilUcCsw10HMf7X50fbeph6IfiESSzRGBGbxoce'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d321ea45--1a00--5698--8092--45c793cb3b8c-osd--block--d321ea45--1a00--5698--8092--45c793cb3b8c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZHmqh2-AoUZ-coXE-4raU-G2ju-gAl6-S8I80b', 'scsi-0QEMU_QEMU_HARDDISK_1a89a9ff-44c1-4404-a46c-604e790c64d7', 'scsi-SQEMU_QEMU_HARDDISK_1a89a9ff-44c1-4404-a46c-604e790c64d7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.547771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_874d53e3-fb17-4b5b-8e0b-b33da9e1cc23', 'scsi-SQEMU_QEMU_HARDDISK_874d53e3-fb17-4b5b-8e0b-b33da9e1cc23'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.547801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-27-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.547829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547858 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.547872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--923c5540--3b69--54d6--b090--bccde0d698f1-osd--block--923c5540--3b69--54d6--b090--bccde0d698f1', 'dm-uuid-LVM-II044VSc7qX0zAykm1N1e47StvtKMHOQfYefWyYdcT1XJKoLgemSD2EMuRphzNjt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547942 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8acd0346--cc61--560a--be8a--825f05553edd-osd--block--8acd0346--cc61--560a--be8a--825f05553edd', 'dm-uuid-LVM-byoLOTJpo7zdj83o1Q3TMwLiBG8164KG8yqfFlinH5MBI91EtSnlyxHzZgT9GR14'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.547998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.548018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666', 'scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part1', 'scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part14', 'scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part15', 'scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part16', 'scsi-SQEMU_QEMU_HARDDISK_80403e93-bd3e-4884-b247-e0291e0a6666-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.548034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bac76156--9f65--5e37--8447--16c40269f5cf-osd--block--bac76156--9f65--5e37--8447--16c40269f5cf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k3HdPK-IFGM-nunJ-uK6V-IehT-ZxL4-QT0Qr2', 'scsi-0QEMU_QEMU_HARDDISK_3b62db4a-d9c9-4dee-909c-fb2dda9345a8', 'scsi-SQEMU_QEMU_HARDDISK_3b62db4a-d9c9-4dee-909c-fb2dda9345a8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.548049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.548074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.548089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cb3edc0f--ef8f--5bb1--94d3--58e33ab1473b-osd--block--cb3edc0f--ef8f--5bb1--94d3--58e33ab1473b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FNHXoR-F0L1-xpbb-GM1d-Larw-nc1G-0enZLi', 'scsi-0QEMU_QEMU_HARDDISK_5498cf3d-971d-4d04-a26e-caa954b0ff0a', 'scsi-SQEMU_QEMU_HARDDISK_5498cf3d-971d-4d04-a26e-caa954b0ff0a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.548104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.548118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8735590-8c0d-455a-9e36-1ed693cbdd10', 'scsi-SQEMU_QEMU_HARDDISK_a8735590-8c0d-455a-9e36-1ed693cbdd10'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.548133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.548152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-27-00-02-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.548167 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.548182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.548206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.548227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:13.548243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807', 'scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part1', 'scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part14', 'scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part15', 'scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part16', 'scsi-SQEMU_QEMU_HARDDISK_5542f5ea-ae93-4dfe-9922-9cc923bfb807-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.548262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--923c5540--3b69--54d6--b090--bccde0d698f1-osd--block--923c5540--3b69--54d6--b090--bccde0d698f1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QFheq2-iyOV-C2Ex-oYS9-FfkI-yCv5-qAnX1b', 'scsi-0QEMU_QEMU_HARDDISK_a6b08226-ae04-4ebb-8f92-51d42c32f5ac', 'scsi-SQEMU_QEMU_HARDDISK_a6b08226-ae04-4ebb-8f92-51d42c32f5ac'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.548277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8acd0346--cc61--560a--be8a--825f05553edd-osd--block--8acd0346--cc61--560a--be8a--825f05553edd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-f3Nqpq-QzCM-Ycoj-awYo-cA9E-Eiz4-XTKewp', 'scsi-0QEMU_QEMU_HARDDISK_3ba6755c-983a-4f3d-8d53-7abda8c22d5d', 'scsi-SQEMU_QEMU_HARDDISK_3ba6755c-983a-4f3d-8d53-7abda8c22d5d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.548313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b86602b-3b4a-4669-b84e-8d0be08a4eb8', 'scsi-SQEMU_QEMU_HARDDISK_0b86602b-3b4a-4669-b84e-8d0be08a4eb8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.548329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-27-00-02-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:13.548343 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.548360 | orchestrator | 2025-03-27 01:04:13.548374 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-03-27 01:04:13.548388 | orchestrator | Thursday 27 March 2025 01:02:19 +0000 (0:00:00.625) 0:00:20.225 ******** 2025-03-27 01:04:13.548402 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-03-27 01:04:13.548416 | orchestrator | 2025-03-27 01:04:13.548430 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-03-27 01:04:13.548444 | orchestrator | Thursday 27 March 2025 01:02:20 +0000 (0:00:01.594) 0:00:21.820 ******** 2025-03-27 01:04:13.548458 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.548490 | orchestrator | 2025-03-27 01:04:13.548504 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-03-27 01:04:13.548518 | orchestrator | Thursday 27 March 2025 01:02:20 +0000 (0:00:00.172) 0:00:21.993 ******** 2025-03-27 01:04:13.548533 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.548547 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.548561 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.548575 | orchestrator | 2025-03-27 01:04:13.548589 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-03-27 01:04:13.548603 | orchestrator | Thursday 27 March 2025 01:02:21 +0000 (0:00:00.393) 0:00:22.387 ******** 2025-03-27 01:04:13.548617 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.548630 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.548644 | orchestrator | ok: 
[testbed-node-5] 2025-03-27 01:04:13.548658 | orchestrator | 2025-03-27 01:04:13.548672 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-03-27 01:04:13.548685 | orchestrator | Thursday 27 March 2025 01:02:22 +0000 (0:00:00.731) 0:00:23.118 ******** 2025-03-27 01:04:13.548699 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.548713 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.548727 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.548741 | orchestrator | 2025-03-27 01:04:13.548754 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-27 01:04:13.548777 | orchestrator | Thursday 27 March 2025 01:02:22 +0000 (0:00:00.311) 0:00:23.430 ******** 2025-03-27 01:04:13.548791 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.548805 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.548818 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.548832 | orchestrator | 2025-03-27 01:04:13.548846 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-27 01:04:13.548860 | orchestrator | Thursday 27 March 2025 01:02:23 +0000 (0:00:00.948) 0:00:24.378 ******** 2025-03-27 01:04:13.548874 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.548888 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.548901 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.548915 | orchestrator | 2025-03-27 01:04:13.548929 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-27 01:04:13.548943 | orchestrator | Thursday 27 March 2025 01:02:23 +0000 (0:00:00.313) 0:00:24.692 ******** 2025-03-27 01:04:13.548956 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.548970 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.548984 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.548998 | orchestrator | 2025-03-27 01:04:13.549011 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-27 01:04:13.549025 | orchestrator | Thursday 27 March 2025 01:02:24 +0000 (0:00:00.447) 0:00:25.139 ******** 2025-03-27 01:04:13.549039 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.549053 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.549067 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.549081 | orchestrator | 2025-03-27 01:04:13.549094 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-03-27 01:04:13.549108 | orchestrator | Thursday 27 March 2025 01:02:24 +0000 (0:00:00.302) 0:00:25.442 ******** 2025-03-27 01:04:13.549122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-27 01:04:13.549136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-27 01:04:13.549150 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-27 01:04:13.549163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-27 01:04:13.549177 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.549196 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-27 01:04:13.549210 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-27 01:04:13.549224 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-27 01:04:13.549237 | orchestrator 
| skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-27 01:04:13.549251 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.549265 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-27 01:04:13.549279 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.549293 | orchestrator | 2025-03-27 01:04:13.549307 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-03-27 01:04:13.549331 | orchestrator | Thursday 27 March 2025 01:02:25 +0000 (0:00:00.999) 0:00:26.441 ******** 2025-03-27 01:04:13.549345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-27 01:04:13.549359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-27 01:04:13.549373 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-27 01:04:13.549387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-27 01:04:13.549401 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.549415 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-27 01:04:13.549428 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-27 01:04:13.549442 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-27 01:04:13.549455 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-27 01:04:13.549517 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.549540 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-27 01:04:13.549554 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.549566 | orchestrator | 2025-03-27 01:04:13.549579 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-03-27 01:04:13.549591 | orchestrator | Thursday 27 March 2025 01:02:26 +0000 (0:00:00.804) 0:00:27.246 ******** 2025-03-27 01:04:13.549603 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-03-27 01:04:13.549616 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-03-27 01:04:13.549628 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-03-27 01:04:13.549640 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-03-27 01:04:13.549652 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-03-27 01:04:13.549665 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-03-27 01:04:13.549677 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-03-27 01:04:13.549689 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-03-27 01:04:13.549701 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-03-27 01:04:13.549713 | orchestrator | 2025-03-27 01:04:13.549725 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-03-27 01:04:13.549737 | orchestrator | Thursday 27 March 2025 01:02:27 +0000 (0:00:01.845) 0:00:29.091 ******** 2025-03-27 01:04:13.549750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-27 01:04:13.549762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-27 01:04:13.549774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-27 01:04:13.549786 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-27 01:04:13.549798 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-27 
01:04:13.549810 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-27 01:04:13.549822 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.549834 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.549847 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-27 01:04:13.549859 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-27 01:04:13.549871 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-27 01:04:13.549883 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.549895 | orchestrator | 2025-03-27 01:04:13.549907 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-03-27 01:04:13.549920 | orchestrator | Thursday 27 March 2025 01:02:28 +0000 (0:00:00.660) 0:00:29.752 ******** 2025-03-27 01:04:13.549932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-27 01:04:13.549944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-27 01:04:13.549956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-27 01:04:13.549968 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-27 01:04:13.549981 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.549993 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-27 01:04:13.550005 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-27 01:04:13.550041 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.550056 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-27 01:04:13.550068 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-27 01:04:13.550080 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-27 01:04:13.550092 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.550104 | orchestrator | 2025-03-27 01:04:13.550116 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-03-27 01:04:13.550129 | orchestrator | Thursday 27 March 2025 01:02:29 +0000 (0:00:00.410) 0:00:30.163 ******** 2025-03-27 01:04:13.550141 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-27 01:04:13.550166 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-27 01:04:13.550179 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-27 01:04:13.550196 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-27 01:04:13.550209 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-27 01:04:13.550221 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-27 01:04:13.550233 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.550246 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.550258 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-03-27 01:04:13.550277 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-27 01:04:13.550290 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-27 01:04:13.550303 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.550315 | orchestrator | 2025-03-27 01:04:13.550327 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-03-27 01:04:13.550340 | orchestrator | Thursday 27 March 2025 01:02:29 +0000 (0:00:00.423) 0:00:30.586 ******** 2025-03-27 01:04:13.550352 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:04:13.550364 | orchestrator | 2025-03-27 01:04:13.550376 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-03-27 01:04:13.550389 | orchestrator | Thursday 27 March 2025 01:02:30 +0000 (0:00:00.777) 0:00:31.364 ******** 2025-03-27 01:04:13.550401 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.550413 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.550425 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.550437 | orchestrator | 2025-03-27 01:04:13.550449 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-03-27 01:04:13.550479 | orchestrator | Thursday 27 March 2025 01:02:30 +0000 (0:00:00.469) 0:00:31.834 ******** 2025-03-27 01:04:13.550492 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.550504 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.550516 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.550529 | orchestrator | 2025-03-27 01:04:13.550541 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-03-27 01:04:13.550553 | orchestrator | Thursday 27 March 2025 01:02:31 +0000 (0:00:00.351) 0:00:32.185 ******** 2025-03-27 01:04:13.550565 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.550577 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.550590 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.550602 | orchestrator | 2025-03-27 01:04:13.550614 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-03-27 01:04:13.550626 | orchestrator | Thursday 27 March 2025 01:02:31 +0000 (0:00:00.363) 0:00:32.548 ******** 2025-03-27 01:04:13.550638 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.550651 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.550663 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.550675 | orchestrator | 2025-03-27 01:04:13.550688 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-03-27 01:04:13.550700 | orchestrator | Thursday 27 March 2025 01:02:32 +0000 (0:00:00.710) 0:00:33.259 ******** 2025-03-27 01:04:13.550712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:04:13.550725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:04:13.550737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:04:13.550749 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.550773 | orchestrator | 2025-03-27 01:04:13.550785 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-03-27 01:04:13.550797 | orchestrator | Thursday 27 March 2025 01:02:32 +0000 (0:00:00.376) 0:00:33.635 ******** 2025-03-27 
01:04:13.550809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:04:13.550822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:04:13.550838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:04:13.550851 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.550863 | orchestrator | 2025-03-27 01:04:13.550875 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-03-27 01:04:13.550888 | orchestrator | Thursday 27 March 2025 01:02:32 +0000 (0:00:00.421) 0:00:34.057 ******** 2025-03-27 01:04:13.550900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:04:13.550912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:04:13.550924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:04:13.550936 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.550948 | orchestrator | 2025-03-27 01:04:13.550960 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:04:13.550972 | orchestrator | Thursday 27 March 2025 01:02:33 +0000 (0:00:00.511) 0:00:34.569 ******** 2025-03-27 01:04:13.550984 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:04:13.550997 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:04:13.551009 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:04:13.551021 | orchestrator | 2025-03-27 01:04:13.551033 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-03-27 01:04:13.551045 | orchestrator | Thursday 27 March 2025 01:02:33 +0000 (0:00:00.514) 0:00:35.083 ******** 2025-03-27 01:04:13.551057 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-03-27 01:04:13.551070 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-03-27 01:04:13.551082 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-03-27 01:04:13.551094 | orchestrator | 2025-03-27 01:04:13.551106 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-03-27 01:04:13.551118 | orchestrator | Thursday 27 March 2025 01:02:35 +0000 (0:00:01.340) 0:00:36.424 ******** 2025-03-27 01:04:13.551131 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.551143 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.551155 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.551167 | orchestrator | 2025-03-27 01:04:13.551179 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-03-27 01:04:13.551191 | orchestrator | Thursday 27 March 2025 01:02:35 +0000 (0:00:00.354) 0:00:36.778 ******** 2025-03-27 01:04:13.551204 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.551216 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.551228 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.551240 | orchestrator | 2025-03-27 01:04:13.551252 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-03-27 01:04:13.551269 | orchestrator | Thursday 27 March 2025 01:02:36 +0000 (0:00:00.344) 0:00:37.123 ******** 2025-03-27 01:04:13.551282 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-03-27 01:04:13.551294 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.551307 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-03-27 01:04:13.551319 | 
orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.551331 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-03-27 01:04:13.551343 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.551355 | orchestrator | 2025-03-27 01:04:13.551367 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-03-27 01:04:13.551380 | orchestrator | Thursday 27 March 2025 01:02:36 +0000 (0:00:00.484) 0:00:37.607 ******** 2025-03-27 01:04:13.551392 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-03-27 01:04:13.551410 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.551422 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-03-27 01:04:13.551435 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.551447 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-03-27 01:04:13.551459 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.551485 | orchestrator | 2025-03-27 01:04:13.551498 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-03-27 01:04:13.551510 | orchestrator | Thursday 27 March 2025 01:02:37 +0000 (0:00:00.602) 0:00:38.209 ******** 2025-03-27 01:04:13.551523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-27 01:04:13.551535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-27 01:04:13.551547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-27 01:04:13.551559 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.551571 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-27 01:04:13.551583 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-27 01:04:13.551595 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-27 01:04:13.551607 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-27 01:04:13.551620 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.551632 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-27 01:04:13.551644 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-27 01:04:13.551656 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.551668 | orchestrator | 2025-03-27 01:04:13.551681 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-03-27 01:04:13.551693 | orchestrator | Thursday 27 March 2025 01:02:37 +0000 (0:00:00.795) 0:00:39.004 ******** 2025-03-27 01:04:13.551705 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.551717 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.551729 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:04:13.551741 | orchestrator | 2025-03-27 01:04:13.551753 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-03-27 01:04:13.551766 | orchestrator | Thursday 27 March 2025 01:02:38 +0000 (0:00:00.356) 0:00:39.361 ******** 2025-03-27 01:04:13.551778 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-03-27 01:04:13.551790 | orchestrator | 
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:04:13.551802 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:04:13.551814 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-03-27 01:04:13.551827 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-27 01:04:13.551839 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-27 01:04:13.551851 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-27 01:04:13.551863 | orchestrator | 2025-03-27 01:04:13.551875 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-03-27 01:04:13.551888 | orchestrator | Thursday 27 March 2025 01:02:39 +0000 (0:00:01.113) 0:00:40.475 ******** 2025-03-27 01:04:13.551900 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-03-27 01:04:13.551912 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:04:13.551924 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:04:13.551936 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-03-27 01:04:13.551949 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-27 01:04:13.551966 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-27 01:04:13.551978 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-27 01:04:13.551990 | orchestrator | 2025-03-27 01:04:13.552003 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-03-27 01:04:13.552015 | orchestrator | Thursday 27 March 2025 01:02:41 +0000 (0:00:02.068) 0:00:42.544 ******** 2025-03-27 01:04:13.552027 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:04:13.552040 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:04:13.552052 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-03-27 01:04:13.552064 | orchestrator | 2025-03-27 01:04:13.552077 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-03-27 01:04:13.552094 | orchestrator | Thursday 27 March 2025 01:02:42 +0000 (0:00:00.630) 0:00:43.174 ******** 2025-03-27 01:04:13.552108 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-27 01:04:13.552122 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-27 01:04:13.552135 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 
'replicated_rule', 'size': 3, 'type': 1}) 2025-03-27 01:04:13.552148 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-27 01:04:13.552160 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-03-27 01:04:13.552173 | orchestrator | 2025-03-27 01:04:13.552185 | orchestrator | TASK [generate keys] *********************************************************** 2025-03-27 01:04:13.552197 | orchestrator | Thursday 27 March 2025 01:03:22 +0000 (0:00:40.002) 0:01:23.177 ******** 2025-03-27 01:04:13.552210 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552226 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552239 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552251 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552263 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552276 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552288 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-03-27 01:04:13.552300 | orchestrator | 2025-03-27 01:04:13.552313 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-03-27 01:04:13.552325 | orchestrator | Thursday 27 March 2025 01:03:42 +0000 (0:00:20.560) 0:01:43.737 ******** 2025-03-27 01:04:13.552337 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552355 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552367 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552379 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552392 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552404 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552416 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-03-27 01:04:13.552428 | orchestrator | 2025-03-27 01:04:13.552440 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-03-27 01:04:13.552452 | orchestrator | Thursday 27 March 2025 01:03:53 +0000 (0:00:10.590) 0:01:54.328 ******** 2025-03-27 01:04:13.552502 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-03-27 01:04:13.552516 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-03-27 01:04:13.552529 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-03-27 01:04:13.552541 | orchestrator | 
2025-03-27 01:04:13.552185 | orchestrator | TASK [generate keys] ***********************************************************
2025-03-27 01:04:13.552197 | orchestrator | Thursday 27 March 2025 01:03:22 +0000 (0:00:40.002) 0:01:23.177 ********
2025-03-27 01:04:13.552210 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552226 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552239 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552251 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552263 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552276 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552288 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-03-27 01:04:13.552300 | orchestrator |
2025-03-27 01:04:13.552313 | orchestrator | TASK [get keys from monitors] **************************************************
2025-03-27 01:04:13.552325 | orchestrator | Thursday 27 March 2025 01:03:42 +0000 (0:00:20.560) 0:01:43.737 ********
2025-03-27 01:04:13.552337 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552355 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552367 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552379 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552392 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552404 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552416 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-03-27 01:04:13.552428 | orchestrator |
2025-03-27 01:04:13.552440 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-03-27 01:04:13.552452 | orchestrator | Thursday 27 March 2025 01:03:53 +0000 (0:00:10.590) 0:01:54.328 ********
2025-03-27 01:04:13.552502 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552516 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-03-27 01:04:13.552529 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-03-27 01:04:13.552541 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552553 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-03-27 01:04:13.552566 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-03-27 01:04:13.552578 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552590 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-03-27 01:04:13.552602 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-03-27 01:04:13.552615 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:13.552627 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-03-27 01:04:13.552644 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-03-27 01:04:16.605622 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:16.605735 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-03-27 01:04:16.605751 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-03-27 01:04:16.605763 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-03-27 01:04:16.605775 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-03-27 01:04:16.605800 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-03-27 01:04:16.605813 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-03-27 01:04:16.605824 | orchestrator |
2025-03-27 01:04:16.605837 | orchestrator | PLAY RECAP *********************************************************************
2025-03-27 01:04:16.605850 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-03-27 01:04:16.605862 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-03-27 01:04:16.605878 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0
2025-03-27 01:04:16.605889 | orchestrator |
2025-03-27 01:04:16.605901 | orchestrator |
2025-03-27 01:04:16.605912 | orchestrator |
2025-03-27 01:04:16.605923 | orchestrator | TASKS RECAP ********************************************************************
2025-03-27 01:04:16.605934 | orchestrator | Thursday 27 March 2025 01:04:12 +0000 (0:00:19.277) 0:02:13.605 ********
2025-03-27 01:04:16.605945 | orchestrator | ===============================================================================
2025-03-27 01:04:16.605976 | orchestrator | create openstack pool(s) ----------------------------------------------- 40.00s
2025-03-27 01:04:16.605987 | orchestrator | generate keys ---------------------------------------------------------- 20.56s
2025-03-27 01:04:16.605998 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 19.28s
2025-03-27 01:04:16.606009 | orchestrator | get keys from monitors ------------------------------------------------- 10.59s
2025-03-27 01:04:16.606063 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.46s
2025-03-27 01:04:16.606075 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 2.07s
2025-03-27 01:04:16.606086 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.85s
2025-03-27 01:04:16.606097 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.59s
2025-03-27 01:04:16.606108 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.52s
2025-03-27 01:04:16.606119 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 1.34s
2025-03-27 01:04:16.606130 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.11s
2025-03-27 01:04:16.606141 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 1.01s
2025-03-27 01:04:16.606152 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 1.00s
2025-03-27 01:04:16.606163 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.00s
2025-03-27 01:04:16.606175 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.95s
2025-03-27 01:04:16.606186 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.80s
2025-03-27 01:04:16.606197 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.80s
2025-03-27 01:04:16.606208 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.80s
2025-03-27 01:04:16.606219 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.79s
2025-03-27 01:04:16.606233 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.78s
2025-03-27 01:04:16.606428 | orchestrator | 2025-03-27 01:04:13 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED
2025-03-27 01:04:16.606444 | orchestrator | 2025-03-27 01:04:13 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 01:04:16.606458 | orchestrator | 2025-03-27 01:04:13 | INFO  | Wait 1 second(s) until the next check
2025-03-27 01:04:16.606509 | orchestrator | 2025-03-27 01:04:16 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED
2025-03-27 01:04:16.607165 | orchestrator | 2025-03-27 01:04:16 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state STARTED
2025-03-27 01:04:16.607187 | orchestrator | 2025-03-27 01:04:16 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED
2025-03-27 01:04:16.607204 | orchestrator | 2025-03-27 01:04:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 01:04:19.658266 | orchestrator | 2025-03-27 01:04:16 | INFO  | Wait 1 second(s) until the next check
2025-03-27 01:04:19.659062 | orchestrator | 2025-03-27 01:04:19 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED
2025-03-27 01:04:19.660318 | orchestrator | 2025-03-27 01:04:19 | INFO  | Task bc488b23-b51f-4154-8e4d-f28fbf4ae81f is in state SUCCESS
2025-03-27 01:04:19.661591 | orchestrator |
2025-03-27 01:04:19.661660 | orchestrator |
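[Editor's note] The interleaved "INFO | Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines above appear to come from the OSISM deploy wrapper polling its manager until each queued task reports SUCCESS. A minimal sketch of that wait loop, under stated assumptions, follows; get_task_state() is a hypothetical placeholder for however the task status is actually queried, and only the overall pattern (poll, print state, sleep, repeat) is taken from the log.

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        # Poll every `interval` seconds until every task reaches a final state.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # hypothetical status lookup
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)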
2025-03-27 01:04:19.661676 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-27 01:04:19.661756 | orchestrator |
2025-03-27 01:04:19.661775 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-27 01:04:19.661790 | orchestrator | Thursday 27 March 2025 01:02:40 +0000 (0:00:00.515) 0:00:00.515 ********
2025-03-27 01:04:19.661826 | orchestrator | ok: [testbed-node-0]
2025-03-27 01:04:19.662069 | orchestrator | ok: [testbed-node-1]
2025-03-27 01:04:19.662091 | orchestrator | ok: [testbed-node-2]
2025-03-27 01:04:19.662105 | orchestrator |
2025-03-27 01:04:19.662119 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-27 01:04:19.662134 | orchestrator | Thursday 27 March 2025 01:02:40 +0000 (0:00:00.487) 0:00:01.002 ********
2025-03-27 01:04:19.662148 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-03-27 01:04:19.662162 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-03-27 01:04:19.662176 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-03-27 01:04:19.662189 | orchestrator |
2025-03-27 01:04:19.662203 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-03-27 01:04:19.662217 | orchestrator |
2025-03-27 01:04:19.662231 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-03-27 01:04:19.662259 | orchestrator | Thursday 27 March 2025 01:02:41 +0000 (0:00:00.359) 0:00:01.361 ********
2025-03-27 01:04:19.662274 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-27 01:04:19.662289 | orchestrator |
2025-03-27 01:04:19.662303 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-03-27 01:04:19.662317 | orchestrator | Thursday 27 March 2025 01:02:42 +0000 (0:00:00.789) 0:00:02.151 ********
2025-03-27 01:04:19.662336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'],
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 01:04:19.662370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 01:04:19.662398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 01:04:19.662414 | orchestrator | 2025-03-27 01:04:19.662428 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-03-27 01:04:19.662453 | orchestrator | Thursday 27 March 2025 01:02:43 +0000 (0:00:01.673) 0:00:03.825 ******** 2025-03-27 01:04:19.662493 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:19.662508 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:04:19.662522 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.662536 | orchestrator | 2025-03-27 01:04:19.662550 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-03-27 01:04:19.662564 | orchestrator | Thursday 27 March 2025 01:02:44 +0000 (0:00:00.303) 0:00:04.129 ******** 2025-03-27 01:04:19.662587 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-03-27 01:04:19.662602 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-03-27 01:04:19.662616 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-03-27 01:04:19.662630 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-03-27 01:04:19.662645 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-03-27 01:04:19.662660 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-03-27 01:04:19.662676 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-03-27 01:04:19.662692 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-03-27 01:04:19.662708 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-03-27 01:04:19.662724 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-03-27 01:04:19.662739 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-03-27 01:04:19.662755 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-03-27 01:04:19.662772 | orchestrator | skipping: [testbed-node-1] => (item={'name': 
'trove', 'enabled': False})  2025-03-27 01:04:19.662787 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-03-27 01:04:19.662803 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-03-27 01:04:19.662819 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-03-27 01:04:19.662836 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-03-27 01:04:19.662852 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-03-27 01:04:19.662868 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-03-27 01:04:19.662883 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-03-27 01:04:19.662899 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-03-27 01:04:19.662916 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-03-27 01:04:19.663034 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-03-27 01:04:19.663052 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-03-27 01:04:19.663066 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-03-27 01:04:19.663081 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-03-27 01:04:19.663096 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-03-27 01:04:19.663120 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-03-27 01:04:19.663134 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-03-27 01:04:19.663148 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-03-27 01:04:19.663162 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-03-27 01:04:19.663175 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-03-27 01:04:19.663189 | orchestrator | 2025-03-27 01:04:19.663203 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-27 01:04:19.663217 | orchestrator | Thursday 27 March 2025 01:02:45 +0000 (0:00:01.084) 0:00:05.214 ******** 2025-03-27 01:04:19.663231 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:19.663245 | orchestrator | ok: 
[testbed-node-1] 2025-03-27 01:04:19.663258 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.663272 | orchestrator | 2025-03-27 01:04:19.663286 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-27 01:04:19.663300 | orchestrator | Thursday 27 March 2025 01:02:45 +0000 (0:00:00.582) 0:00:05.796 ******** 2025-03-27 01:04:19.663314 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.663329 | orchestrator | 2025-03-27 01:04:19.663359 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-27 01:04:19.663374 | orchestrator | Thursday 27 March 2025 01:02:45 +0000 (0:00:00.126) 0:00:05.923 ******** 2025-03-27 01:04:19.663388 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.663402 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.663415 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.663429 | orchestrator | 2025-03-27 01:04:19.663443 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-27 01:04:19.663456 | orchestrator | Thursday 27 March 2025 01:02:46 +0000 (0:00:00.460) 0:00:06.383 ******** 2025-03-27 01:04:19.663493 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:19.663508 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:04:19.663522 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.663535 | orchestrator | 2025-03-27 01:04:19.663549 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-27 01:04:19.663563 | orchestrator | Thursday 27 March 2025 01:02:46 +0000 (0:00:00.302) 0:00:06.685 ******** 2025-03-27 01:04:19.663576 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.663590 | orchestrator | 2025-03-27 01:04:19.663604 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-27 01:04:19.663617 | orchestrator | Thursday 27 March 2025 01:02:46 +0000 (0:00:00.279) 0:00:06.964 ******** 2025-03-27 01:04:19.663631 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.663644 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.663661 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.663677 | orchestrator | 2025-03-27 01:04:19.663693 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-27 01:04:19.663709 | orchestrator | Thursday 27 March 2025 01:02:47 +0000 (0:00:00.362) 0:00:07.327 ******** 2025-03-27 01:04:19.663724 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:19.663740 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:04:19.663756 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.663777 | orchestrator | 2025-03-27 01:04:19.663793 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-27 01:04:19.663809 | orchestrator | Thursday 27 March 2025 01:02:47 +0000 (0:00:00.499) 0:00:07.826 ******** 2025-03-27 01:04:19.663831 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.663848 | orchestrator | 2025-03-27 01:04:19.663863 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-27 01:04:19.663879 | orchestrator | Thursday 27 March 2025 01:02:47 +0000 (0:00:00.132) 0:00:07.959 ******** 2025-03-27 01:04:19.663895 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.663911 | orchestrator | skipping: [testbed-node-1] 2025-03-27 
01:04:19.663927 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.663942 | orchestrator | 2025-03-27 01:04:19.663958 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-27 01:04:19.663973 | orchestrator | Thursday 27 March 2025 01:02:48 +0000 (0:00:00.488) 0:00:08.447 ******** 2025-03-27 01:04:19.663989 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:19.664005 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:04:19.664019 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.664032 | orchestrator | 2025-03-27 01:04:19.664046 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-27 01:04:19.664060 | orchestrator | Thursday 27 March 2025 01:02:48 +0000 (0:00:00.508) 0:00:08.956 ******** 2025-03-27 01:04:19.664073 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.664087 | orchestrator | 2025-03-27 01:04:19.664100 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-27 01:04:19.664114 | orchestrator | Thursday 27 March 2025 01:02:49 +0000 (0:00:00.149) 0:00:09.105 ******** 2025-03-27 01:04:19.664128 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.664141 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.664155 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.664169 | orchestrator | 2025-03-27 01:04:19.664183 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-27 01:04:19.664196 | orchestrator | Thursday 27 March 2025 01:02:49 +0000 (0:00:00.449) 0:00:09.554 ******** 2025-03-27 01:04:19.664210 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:19.664224 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:04:19.664237 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.664251 | orchestrator | 2025-03-27 01:04:19.664265 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-27 01:04:19.664279 | orchestrator | Thursday 27 March 2025 01:02:49 +0000 (0:00:00.330) 0:00:09.885 ******** 2025-03-27 01:04:19.664292 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.664306 | orchestrator | 2025-03-27 01:04:19.664320 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-27 01:04:19.664333 | orchestrator | Thursday 27 March 2025 01:02:50 +0000 (0:00:00.273) 0:00:10.158 ******** 2025-03-27 01:04:19.664347 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.664361 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.664374 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.664388 | orchestrator | 2025-03-27 01:04:19.664402 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-27 01:04:19.664415 | orchestrator | Thursday 27 March 2025 01:02:50 +0000 (0:00:00.315) 0:00:10.474 ******** 2025-03-27 01:04:19.664429 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:19.664442 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:04:19.664456 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.664534 | orchestrator | 2025-03-27 01:04:19.664550 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-27 01:04:19.664563 | orchestrator | Thursday 27 March 2025 01:02:50 +0000 (0:00:00.545) 0:00:11.019 ******** 2025-03-27 01:04:19.664577 | orchestrator | skipping: 
[testbed-node-0] 2025-03-27 01:04:19.664591 | orchestrator | 2025-03-27 01:04:19.664605 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-27 01:04:19.664623 | orchestrator | Thursday 27 March 2025 01:02:51 +0000 (0:00:00.123) 0:00:11.143 ******** 2025-03-27 01:04:19.664635 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.664647 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.664667 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.664679 | orchestrator | 2025-03-27 01:04:19.664692 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-27 01:04:19.664704 | orchestrator | Thursday 27 March 2025 01:02:51 +0000 (0:00:00.447) 0:00:11.590 ******** 2025-03-27 01:04:19.664722 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:19.664735 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:04:19.664747 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.664759 | orchestrator | 2025-03-27 01:04:19.664771 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-27 01:04:19.664784 | orchestrator | Thursday 27 March 2025 01:02:52 +0000 (0:00:00.494) 0:00:12.085 ******** 2025-03-27 01:04:19.664796 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.664808 | orchestrator | 2025-03-27 01:04:19.664821 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-27 01:04:19.664833 | orchestrator | Thursday 27 March 2025 01:02:52 +0000 (0:00:00.160) 0:00:12.245 ******** 2025-03-27 01:04:19.664845 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.664857 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.664870 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.664882 | orchestrator | 2025-03-27 01:04:19.664894 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-27 01:04:19.664906 | orchestrator | Thursday 27 March 2025 01:02:52 +0000 (0:00:00.431) 0:00:12.677 ******** 2025-03-27 01:04:19.664919 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:19.664931 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:04:19.664943 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.664955 | orchestrator | 2025-03-27 01:04:19.664968 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-27 01:04:19.664980 | orchestrator | Thursday 27 March 2025 01:02:53 +0000 (0:00:00.510) 0:00:13.188 ******** 2025-03-27 01:04:19.664996 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.665016 | orchestrator | 2025-03-27 01:04:19.665037 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-27 01:04:19.665056 | orchestrator | Thursday 27 March 2025 01:02:53 +0000 (0:00:00.140) 0:00:13.329 ******** 2025-03-27 01:04:19.665078 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.665099 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.665118 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.665131 | orchestrator | 2025-03-27 01:04:19.665143 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-27 01:04:19.665156 | orchestrator | Thursday 27 March 2025 01:02:53 +0000 (0:00:00.447) 0:00:13.776 ******** 2025-03-27 01:04:19.665168 | orchestrator | ok: [testbed-node-0] 2025-03-27 
01:04:19.665180 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:04:19.665193 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.665205 | orchestrator | 2025-03-27 01:04:19.665217 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-27 01:04:19.665229 | orchestrator | Thursday 27 March 2025 01:02:54 +0000 (0:00:00.337) 0:00:14.114 ******** 2025-03-27 01:04:19.665242 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.665254 | orchestrator | 2025-03-27 01:04:19.665266 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-27 01:04:19.665278 | orchestrator | Thursday 27 March 2025 01:02:54 +0000 (0:00:00.126) 0:00:14.241 ******** 2025-03-27 01:04:19.665291 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.665303 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.665315 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.665327 | orchestrator | 2025-03-27 01:04:19.665339 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-27 01:04:19.665351 | orchestrator | Thursday 27 March 2025 01:02:54 +0000 (0:00:00.457) 0:00:14.698 ******** 2025-03-27 01:04:19.665363 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:19.665376 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:04:19.665388 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.665408 | orchestrator | 2025-03-27 01:04:19.665420 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-27 01:04:19.665433 | orchestrator | Thursday 27 March 2025 01:02:55 +0000 (0:00:00.489) 0:00:15.188 ******** 2025-03-27 01:04:19.665445 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.665457 | orchestrator | 2025-03-27 01:04:19.665486 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-27 01:04:19.665498 | orchestrator | Thursday 27 March 2025 01:02:55 +0000 (0:00:00.119) 0:00:15.308 ******** 2025-03-27 01:04:19.665510 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.665528 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.665541 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.665553 | orchestrator | 2025-03-27 01:04:19.665565 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-03-27 01:04:19.665577 | orchestrator | Thursday 27 March 2025 01:02:55 +0000 (0:00:00.451) 0:00:15.759 ******** 2025-03-27 01:04:19.665589 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:19.665601 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:04:19.665614 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:04:19.665626 | orchestrator | 2025-03-27 01:04:19.665638 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-03-27 01:04:19.665650 | orchestrator | Thursday 27 March 2025 01:02:56 +0000 (0:00:00.509) 0:00:16.269 ******** 2025-03-27 01:04:19.665662 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.665674 | orchestrator | 2025-03-27 01:04:19.665686 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-03-27 01:04:19.665698 | orchestrator | Thursday 27 March 2025 01:02:56 +0000 (0:00:00.137) 0:00:16.407 ******** 2025-03-27 01:04:19.665710 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.665723 | orchestrator | 
skipping: [testbed-node-1] 2025-03-27 01:04:19.665735 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.665747 | orchestrator | 2025-03-27 01:04:19.665759 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-03-27 01:04:19.665775 | orchestrator | Thursday 27 March 2025 01:02:56 +0000 (0:00:00.447) 0:00:16.854 ******** 2025-03-27 01:04:19.665788 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:04:19.665800 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:04:19.665812 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:04:19.665824 | orchestrator | 2025-03-27 01:04:19.665837 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-03-27 01:04:19.665849 | orchestrator | Thursday 27 March 2025 01:02:59 +0000 (0:00:02.974) 0:00:19.828 ******** 2025-03-27 01:04:19.665861 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-03-27 01:04:19.665880 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-03-27 01:04:19.665893 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-03-27 01:04:19.665905 | orchestrator | 2025-03-27 01:04:19.665918 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-03-27 01:04:19.665930 | orchestrator | Thursday 27 March 2025 01:03:02 +0000 (0:00:02.512) 0:00:22.341 ******** 2025-03-27 01:04:19.665942 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-03-27 01:04:19.665955 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-03-27 01:04:19.665967 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-03-27 01:04:19.665979 | orchestrator | 2025-03-27 01:04:19.665991 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-03-27 01:04:19.666003 | orchestrator | Thursday 27 March 2025 01:03:05 +0000 (0:00:03.191) 0:00:25.533 ******** 2025-03-27 01:04:19.666045 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-03-27 01:04:19.666067 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-03-27 01:04:19.666080 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-03-27 01:04:19.666092 | orchestrator | 2025-03-27 01:04:19.666104 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-03-27 01:04:19.666116 | orchestrator | Thursday 27 March 2025 01:03:07 +0000 (0:00:02.457) 0:00:27.990 ******** 2025-03-27 01:04:19.666128 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.666141 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.666153 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.666165 | orchestrator | 2025-03-27 01:04:19.666177 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-03-27 01:04:19.666189 | orchestrator | Thursday 27 March 2025 01:03:08 +0000 (0:00:00.344) 0:00:28.335 ******** 2025-03-27 01:04:19.666201 | orchestrator | skipping: [testbed-node-0] 2025-03-27 
01:04:19.666214 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.666226 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.666238 | orchestrator | 2025-03-27 01:04:19.666251 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-03-27 01:04:19.666263 | orchestrator | Thursday 27 March 2025 01:03:08 +0000 (0:00:00.476) 0:00:28.812 ******** 2025-03-27 01:04:19.666275 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:04:19.666287 | orchestrator | 2025-03-27 01:04:19.666300 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-03-27 01:04:19.666312 | orchestrator | Thursday 27 March 2025 01:03:09 +0000 (0:00:00.873) 0:00:29.685 ******** 2025-03-27 01:04:19.666332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 01:04:19.666348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 01:04:19.666380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 01:04:19.666400 | orchestrator | 2025-03-27 01:04:19.666413 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-03-27 01:04:19.666425 | orchestrator | Thursday 27 March 2025 01:03:11 +0000 (0:00:01.806) 0:00:31.492 ******** 2025-03-27 01:04:19.666438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-27 01:04:19.666451 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.666488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-27 01:04:19.666511 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.666524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-27 01:04:19.666538 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.666550 | orchestrator | 2025-03-27 01:04:19.666562 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-03-27 01:04:19.666574 | orchestrator | Thursday 27 March 2025 01:03:12 +0000 (0:00:00.808) 0:00:32.301 ******** 2025-03-27 01:04:19.666596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-27 01:04:19.666616 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.666629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-27 01:04:19.666642 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.666664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-27 01:04:19.666685 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.666697 | orchestrator | 2025-03-27 01:04:19.666709 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-03-27 01:04:19.666722 | orchestrator | Thursday 27 March 2025 01:03:13 +0000 (0:00:01.293) 0:00:33.594 ******** 2025-03-27 01:04:19.666740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 01:04:19.666761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 01:04:19.666788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-27 01:04:19.666808 | orchestrator | 2025-03-27 01:04:19.666820 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-03-27 01:04:19.666833 | orchestrator | Thursday 27 March 2025 01:03:19 +0000 (0:00:05.563) 0:00:39.157 ******** 2025-03-27 01:04:19.666845 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:19.666857 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:04:19.666869 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:04:19.666881 | orchestrator | 2025-03-27 01:04:19.666893 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-03-27 01:04:19.666906 | orchestrator | Thursday 27 March 2025 01:03:19 +0000 (0:00:00.558) 0:00:39.715 ******** 2025-03-27 01:04:19.666918 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:04:19.666930 | orchestrator | 2025-03-27 01:04:19.666942 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-03-27 01:04:19.666955 | orchestrator | Thursday 27 March 2025 01:03:20 +0000 (0:00:00.713) 0:00:40.429 ******** 2025-03-27 01:04:19.666967 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:04:19.666979 | orchestrator | 2025-03-27 01:04:19.666991 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-03-27 01:04:19.667003 | orchestrator | Thursday 27 March 2025 01:03:23 +0000 (0:00:02.716) 0:00:43.146 ******** 2025-03-27 01:04:19.667015 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:04:19.667027 | orchestrator | 2025-03-27 01:04:19.667040 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-03-27 01:04:19.667052 | orchestrator | Thursday 27 March 2025 01:03:25 +0000 (0:00:02.570) 0:00:45.717 ******** 2025-03-27 01:04:19.667064 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:04:19.667076 | orchestrator | 2025-03-27 01:04:19.667089 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-03-27 01:04:19.667105 | orchestrator | Thursday 27 March 2025 01:03:40 +0000 (0:00:15.025) 0:01:00.742 ******** 2025-03-27 01:04:19.667118 | orchestrator | 2025-03-27 01:04:19.667130 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-03-27 01:04:19.667142 | orchestrator | Thursday 27 March 2025 01:03:40 +0000 (0:00:00.068) 0:01:00.811 ******** 2025-03-27 01:04:19.667155 | orchestrator | 2025-03-27 01:04:19.667167 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-03-27 01:04:19.667179 | orchestrator | Thursday 27 March 2025 01:03:40 +0000 (0:00:00.210) 0:01:01.022 ******** 2025-03-27 01:04:19.667191 | orchestrator | 2025-03-27 01:04:19.667203 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-03-27 01:04:19.667216 | orchestrator | Thursday 27 March 2025 01:03:41 +0000 (0:00:00.068) 0:01:01.090 ******** 2025-03-27 01:04:19.667228 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:04:19.667240 | orchestrator | 
changed: [testbed-node-1] 2025-03-27 01:04:19.667252 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:04:19.667264 | orchestrator | 2025-03-27 01:04:19.667277 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:04:19.667289 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-03-27 01:04:19.667309 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-03-27 01:04:19.667321 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-03-27 01:04:19.667334 | orchestrator | 2025-03-27 01:04:19.667346 | orchestrator | 2025-03-27 01:04:19.667358 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:04:19.667370 | orchestrator | Thursday 27 March 2025 01:04:17 +0000 (0:00:36.733) 0:01:37.824 ******** 2025-03-27 01:04:19.667382 | orchestrator | =============================================================================== 2025-03-27 01:04:19.667394 | orchestrator | horizon : Restart horizon container ------------------------------------ 36.73s 2025-03-27 01:04:19.667407 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.03s 2025-03-27 01:04:19.667419 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.56s 2025-03-27 01:04:19.667431 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.19s 2025-03-27 01:04:19.667443 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.97s 2025-03-27 01:04:19.667456 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.72s 2025-03-27 01:04:19.667484 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.57s 2025-03-27 01:04:19.667497 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.51s 2025-03-27 01:04:19.667509 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.46s 2025-03-27 01:04:19.667522 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.81s 2025-03-27 01:04:19.667533 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.67s 2025-03-27 01:04:19.667545 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.29s 2025-03-27 01:04:19.667558 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.08s 2025-03-27 01:04:19.667575 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.87s 2025-03-27 01:04:22.722826 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.81s 2025-03-27 01:04:22.722946 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s 2025-03-27 01:04:22.722965 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2025-03-27 01:04:22.722980 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s 2025-03-27 01:04:22.722995 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2025-03-27 01:04:22.723009 | orchestrator | horizon : Update policy 
file name --------------------------------------- 0.55s 2025-03-27 01:04:22.723023 | orchestrator | 2025-03-27 01:04:19 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:22.723038 | orchestrator | 2025-03-27 01:04:19 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:22.723052 | orchestrator | 2025-03-27 01:04:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:22.723083 | orchestrator | 2025-03-27 01:04:22 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:22.724197 | orchestrator | 2025-03-27 01:04:22 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:22.726762 | orchestrator | 2025-03-27 01:04:22 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:22.727238 | orchestrator | 2025-03-27 01:04:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:25.777680 | orchestrator | 2025-03-27 01:04:25 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:25.780854 | orchestrator | 2025-03-27 01:04:25 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:25.781169 | orchestrator | 2025-03-27 01:04:25 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:25.782249 | orchestrator | 2025-03-27 01:04:25 | INFO  | Task 03ff22ea-5c5e-4a5c-86da-de981a05f073 is in state STARTED 2025-03-27 01:04:28.838342 | orchestrator | 2025-03-27 01:04:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:28.838542 | orchestrator | 2025-03-27 01:04:28 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:28.839631 | orchestrator | 2025-03-27 01:04:28 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:28.840763 | orchestrator | 2025-03-27 01:04:28 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:28.842238 | orchestrator | 2025-03-27 01:04:28 | INFO  | Task 03ff22ea-5c5e-4a5c-86da-de981a05f073 is in state STARTED 2025-03-27 01:04:31.900733 | orchestrator | 2025-03-27 01:04:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:31.900853 | orchestrator | 2025-03-27 01:04:31 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:31.905103 | orchestrator | 2025-03-27 01:04:31 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:31.906252 | orchestrator | 2025-03-27 01:04:31 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:31.906295 | orchestrator | 2025-03-27 01:04:31 | INFO  | Task 03ff22ea-5c5e-4a5c-86da-de981a05f073 is in state STARTED 2025-03-27 01:04:34.971173 | orchestrator | 2025-03-27 01:04:31 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:34.971305 | orchestrator | 2025-03-27 01:04:34 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:34.976600 | orchestrator | 2025-03-27 01:04:34 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:34.978061 | orchestrator | 2025-03-27 01:04:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:34.979607 | orchestrator | 2025-03-27 01:04:34 | INFO  | Task 03ff22ea-5c5e-4a5c-86da-de981a05f073 is in state STARTED 2025-03-27 01:04:38.042158 | orchestrator | 2025-03-27 01:04:34 | INFO  | Wait 1 
second(s) until the next check 2025-03-27 01:04:38.042286 | orchestrator | 2025-03-27 01:04:38 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:38.043335 | orchestrator | 2025-03-27 01:04:38 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:38.043370 | orchestrator | 2025-03-27 01:04:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:38.044517 | orchestrator | 2025-03-27 01:04:38 | INFO  | Task 03ff22ea-5c5e-4a5c-86da-de981a05f073 is in state STARTED 2025-03-27 01:04:41.088887 | orchestrator | 2025-03-27 01:04:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:41.089004 | orchestrator | 2025-03-27 01:04:41 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:41.090140 | orchestrator | 2025-03-27 01:04:41 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:41.092704 | orchestrator | 2025-03-27 01:04:41 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:41.098185 | orchestrator | 2025-03-27 01:04:41 | INFO  | Task 03ff22ea-5c5e-4a5c-86da-de981a05f073 is in state STARTED 2025-03-27 01:04:44.151257 | orchestrator | 2025-03-27 01:04:41 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:44.151393 | orchestrator | 2025-03-27 01:04:44 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:44.152358 | orchestrator | 2025-03-27 01:04:44 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:44.154531 | orchestrator | 2025-03-27 01:04:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:44.155657 | orchestrator | 2025-03-27 01:04:44 | INFO  | Task 03ff22ea-5c5e-4a5c-86da-de981a05f073 is in state STARTED 2025-03-27 01:04:47.207716 | orchestrator | 2025-03-27 01:04:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:47.207841 | orchestrator | 2025-03-27 01:04:47 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:47.210081 | orchestrator | 2025-03-27 01:04:47 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:47.211318 | orchestrator | 2025-03-27 01:04:47 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:47.213169 | orchestrator | 2025-03-27 01:04:47 | INFO  | Task 03ff22ea-5c5e-4a5c-86da-de981a05f073 is in state STARTED 2025-03-27 01:04:50.268702 | orchestrator | 2025-03-27 01:04:47 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:50.268841 | orchestrator | 2025-03-27 01:04:50 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:50.271624 | orchestrator | 2025-03-27 01:04:50 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:50.274357 | orchestrator | 2025-03-27 01:04:50 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:50.276930 | orchestrator | 2025-03-27 01:04:50 | INFO  | Task 03ff22ea-5c5e-4a5c-86da-de981a05f073 is in state STARTED 2025-03-27 01:04:53.332168 | orchestrator | 2025-03-27 01:04:50 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:53.332322 | orchestrator | 2025-03-27 01:04:53 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:53.333786 | orchestrator | 2025-03-27 01:04:53 | INFO  | Task 
363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:53.333822 | orchestrator | 2025-03-27 01:04:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:53.335529 | orchestrator | 2025-03-27 01:04:53 | INFO  | Task 03ff22ea-5c5e-4a5c-86da-de981a05f073 is in state STARTED 2025-03-27 01:04:56.392651 | orchestrator | 2025-03-27 01:04:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:56.392793 | orchestrator | 2025-03-27 01:04:56 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:56.394127 | orchestrator | 2025-03-27 01:04:56 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:56.395940 | orchestrator | 2025-03-27 01:04:56 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:04:56.398734 | orchestrator | 2025-03-27 01:04:56 | INFO  | Task 03ff22ea-5c5e-4a5c-86da-de981a05f073 is in state SUCCESS 2025-03-27 01:04:56.401981 | orchestrator | 2025-03-27 01:04:56.402077 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-27 01:04:56.402097 | orchestrator | 2025-03-27 01:04:56.402122 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-03-27 01:04:56.402137 | orchestrator | 2025-03-27 01:04:56.402151 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-03-27 01:04:56.402165 | orchestrator | Thursday 27 March 2025 01:04:26 +0000 (0:00:00.755) 0:00:00.755 ******** 2025-03-27 01:04:56.402203 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-03-27 01:04:56.402219 | orchestrator | 2025-03-27 01:04:56.402233 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-03-27 01:04:56.402247 | orchestrator | Thursday 27 March 2025 01:04:26 +0000 (0:00:00.250) 0:00:01.005 ******** 2025-03-27 01:04:56.402261 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:04:56.402275 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-03-27 01:04:56.402290 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-03-27 01:04:56.402303 | orchestrator | 2025-03-27 01:04:56.402317 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-03-27 01:04:56.402331 | orchestrator | Thursday 27 March 2025 01:04:27 +0000 (0:00:00.962) 0:00:01.968 ******** 2025-03-27 01:04:56.402345 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-03-27 01:04:56.402359 | orchestrator | 2025-03-27 01:04:56.402372 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-03-27 01:04:56.402392 | orchestrator | Thursday 27 March 2025 01:04:27 +0000 (0:00:00.269) 0:00:02.237 ******** 2025-03-27 01:04:56.402406 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.402421 | orchestrator | 2025-03-27 01:04:56.402435 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-03-27 01:04:56.402449 | orchestrator | Thursday 27 March 2025 01:04:28 +0000 (0:00:00.670) 0:00:02.908 ******** 2025-03-27 01:04:56.402463 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.402522 | orchestrator | 2025-03-27 01:04:56.402537 | orchestrator | TASK [ceph-facts : check if podman binary is 
present] ************************** 2025-03-27 01:04:56.402551 | orchestrator | Thursday 27 March 2025 01:04:28 +0000 (0:00:00.159) 0:00:03.067 ******** 2025-03-27 01:04:56.402566 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.402583 | orchestrator | 2025-03-27 01:04:56.402599 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-03-27 01:04:56.402615 | orchestrator | Thursday 27 March 2025 01:04:28 +0000 (0:00:00.540) 0:00:03.608 ******** 2025-03-27 01:04:56.402631 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.402647 | orchestrator | 2025-03-27 01:04:56.402663 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-03-27 01:04:56.402679 | orchestrator | Thursday 27 March 2025 01:04:29 +0000 (0:00:00.145) 0:00:03.753 ******** 2025-03-27 01:04:56.402696 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.402712 | orchestrator | 2025-03-27 01:04:56.402728 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-03-27 01:04:56.402743 | orchestrator | Thursday 27 March 2025 01:04:29 +0000 (0:00:00.137) 0:00:03.891 ******** 2025-03-27 01:04:56.402759 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.402774 | orchestrator | 2025-03-27 01:04:56.402790 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-03-27 01:04:56.402805 | orchestrator | Thursday 27 March 2025 01:04:29 +0000 (0:00:00.162) 0:00:04.054 ******** 2025-03-27 01:04:56.402821 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.402838 | orchestrator | 2025-03-27 01:04:56.402853 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-03-27 01:04:56.402869 | orchestrator | Thursday 27 March 2025 01:04:29 +0000 (0:00:00.139) 0:00:04.193 ******** 2025-03-27 01:04:56.402884 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.402900 | orchestrator | 2025-03-27 01:04:56.402915 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-03-27 01:04:56.402929 | orchestrator | Thursday 27 March 2025 01:04:29 +0000 (0:00:00.346) 0:00:04.540 ******** 2025-03-27 01:04:56.402942 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:04:56.402956 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:04:56.402970 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:04:56.402992 | orchestrator | 2025-03-27 01:04:56.403006 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-03-27 01:04:56.403019 | orchestrator | Thursday 27 March 2025 01:04:30 +0000 (0:00:00.719) 0:00:05.260 ******** 2025-03-27 01:04:56.403033 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.403047 | orchestrator | 2025-03-27 01:04:56.403061 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-03-27 01:04:56.403080 | orchestrator | Thursday 27 March 2025 01:04:30 +0000 (0:00:00.264) 0:00:05.524 ******** 2025-03-27 01:04:56.403094 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:04:56.403108 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:04:56.403122 | orchestrator | changed: [testbed-node-0 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:04:56.403136 | orchestrator | 2025-03-27 01:04:56.403149 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-03-27 01:04:56.403163 | orchestrator | Thursday 27 March 2025 01:04:33 +0000 (0:00:02.140) 0:00:07.665 ******** 2025-03-27 01:04:56.403177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:04:56.403191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:04:56.403205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:04:56.403218 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.403232 | orchestrator | 2025-03-27 01:04:56.403246 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-03-27 01:04:56.403272 | orchestrator | Thursday 27 March 2025 01:04:33 +0000 (0:00:00.463) 0:00:08.129 ******** 2025-03-27 01:04:56.403293 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-03-27 01:04:56.403311 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-03-27 01:04:56.403325 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-03-27 01:04:56.403339 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.403353 | orchestrator | 2025-03-27 01:04:56.403367 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-03-27 01:04:56.403381 | orchestrator | Thursday 27 March 2025 01:04:34 +0000 (0:00:00.897) 0:00:09.027 ******** 2025-03-27 01:04:56.403396 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-27 01:04:56.403411 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-27 01:04:56.403426 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-03-27 
01:04:56.403446 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.403461 | orchestrator | 2025-03-27 01:04:56.403511 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-03-27 01:04:56.403527 | orchestrator | Thursday 27 March 2025 01:04:34 +0000 (0:00:00.173) 0:00:09.201 ******** 2025-03-27 01:04:56.403545 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'a90b4449bff6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-03-27 01:04:31.640684', 'end': '2025-03-27 01:04:31.681012', 'delta': '0:00:00.040328', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a90b4449bff6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-03-27 01:04:56.403563 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'e5e85aecd111', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-03-27 01:04:32.281583', 'end': '2025-03-27 01:04:32.316733', 'delta': '0:00:00.035150', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e5e85aecd111'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-03-27 01:04:56.403588 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'a80c9b827c1e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-03-27 01:04:32.822740', 'end': '2025-03-27 01:04:32.862180', 'delta': '0:00:00.039440', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a80c9b827c1e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-03-27 01:04:56.403604 | orchestrator | 2025-03-27 01:04:56.403618 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-03-27 01:04:56.403631 | orchestrator | Thursday 27 March 2025 01:04:34 +0000 (0:00:00.244) 0:00:09.445 ******** 2025-03-27 01:04:56.403645 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.403659 | orchestrator | 2025-03-27 01:04:56.403673 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-03-27 01:04:56.403687 | orchestrator | Thursday 27 March 2025 01:04:35 +0000 (0:00:00.290) 0:00:09.737 ******** 2025-03-27 01:04:56.403700 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-03-27 01:04:56.403714 | orchestrator | 2025-03-27 01:04:56.403728 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-03-27 
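The "find a running mon container" results above come from running `docker ps -q --filter name=ceph-mon-<hostname>` on each monitor and keeping the returned container ID. A hedged sketch of the same check, assuming local Docker access and the container naming scheme visible in the log (the exec-command format at the end is an assumption, not the literal fact value):

```python
# Sketch: detect a running ceph-mon container the same way the log shows,
# i.e. `docker ps -q --filter name=ceph-mon-<hostname>`.
import subprocess

def find_mon_container(hostname):
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=False,
    )
    container_id = result.stdout.strip()
    return container_id or None

for node in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
    cid = find_mon_container(node)
    if cid:
        # Roughly the shape of the container_exec_cmd fact: run ceph through
        # the mon container (exact command format is an assumption).
        print(f"{node}: docker exec {cid} ceph ...")
    else:
        print(f"{node}: no running ceph-mon container found")
```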
01:04:56.403742 | orchestrator | Thursday 27 March 2025 01:04:36 +0000 (0:00:01.757) 0:00:11.495 ******** 2025-03-27 01:04:56.403755 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.403769 | orchestrator | 2025-03-27 01:04:56.403782 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-03-27 01:04:56.403796 | orchestrator | Thursday 27 March 2025 01:04:36 +0000 (0:00:00.135) 0:00:11.630 ******** 2025-03-27 01:04:56.403809 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.403823 | orchestrator | 2025-03-27 01:04:56.403846 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-27 01:04:56.403860 | orchestrator | Thursday 27 March 2025 01:04:37 +0000 (0:00:00.275) 0:00:11.905 ******** 2025-03-27 01:04:56.403873 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.403887 | orchestrator | 2025-03-27 01:04:56.403901 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-03-27 01:04:56.403915 | orchestrator | Thursday 27 March 2025 01:04:37 +0000 (0:00:00.126) 0:00:12.032 ******** 2025-03-27 01:04:56.403928 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.403942 | orchestrator | 2025-03-27 01:04:56.403956 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-03-27 01:04:56.403969 | orchestrator | Thursday 27 March 2025 01:04:37 +0000 (0:00:00.134) 0:00:12.167 ******** 2025-03-27 01:04:56.403983 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.403997 | orchestrator | 2025-03-27 01:04:56.404011 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-03-27 01:04:56.404024 | orchestrator | Thursday 27 March 2025 01:04:37 +0000 (0:00:00.256) 0:00:12.424 ******** 2025-03-27 01:04:56.404038 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.404061 | orchestrator | 2025-03-27 01:04:56.404075 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-03-27 01:04:56.404090 | orchestrator | Thursday 27 March 2025 01:04:37 +0000 (0:00:00.136) 0:00:12.560 ******** 2025-03-27 01:04:56.404104 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.404118 | orchestrator | 2025-03-27 01:04:56.404133 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-03-27 01:04:56.404147 | orchestrator | Thursday 27 March 2025 01:04:38 +0000 (0:00:00.165) 0:00:12.725 ******** 2025-03-27 01:04:56.404161 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.404175 | orchestrator | 2025-03-27 01:04:56.404189 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-03-27 01:04:56.404207 | orchestrator | Thursday 27 March 2025 01:04:38 +0000 (0:00:00.158) 0:00:12.883 ******** 2025-03-27 01:04:56.404221 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.404235 | orchestrator | 2025-03-27 01:04:56.404249 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-03-27 01:04:56.404263 | orchestrator | Thursday 27 March 2025 01:04:38 +0000 (0:00:00.114) 0:00:12.998 ******** 2025-03-27 01:04:56.404277 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.404290 | orchestrator | 2025-03-27 01:04:56.404304 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 
2025-03-27 01:04:56.404318 | orchestrator | Thursday 27 March 2025 01:04:38 +0000 (0:00:00.335) 0:00:13.333 ******** 2025-03-27 01:04:56.404332 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.404345 | orchestrator | 2025-03-27 01:04:56.404359 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-03-27 01:04:56.404373 | orchestrator | Thursday 27 March 2025 01:04:38 +0000 (0:00:00.138) 0:00:13.471 ******** 2025-03-27 01:04:56.404386 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.404400 | orchestrator | 2025-03-27 01:04:56.404414 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-03-27 01:04:56.404427 | orchestrator | Thursday 27 March 2025 01:04:38 +0000 (0:00:00.155) 0:00:13.626 ******** 2025-03-27 01:04:56.404441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:56.404463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:56.404518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:56.404544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:56.404569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:56.404584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}})  2025-03-27 01:04:56.404598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:56.404612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-03-27 01:04:56.404639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5', 'scsi-SQEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a5b8b10-dd3c-4c45-a0af-94d307a6d3f5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:56.404664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dbd72eb5-415c-46b6-800c-c9a4152e0b1d', 'scsi-SQEMU_QEMU_HARDDISK_dbd72eb5-415c-46b6-800c-c9a4152e0b1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:56.404680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c06239b1-1e23-4e3e-9542-3c7768e76fd7', 'scsi-SQEMU_QEMU_HARDDISK_c06239b1-1e23-4e3e-9542-3c7768e76fd7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:56.404695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c304c21c-7b61-43fc-89e5-88e0ceb08200', 'scsi-SQEMU_QEMU_HARDDISK_c304c21c-7b61-43fc-89e5-88e0ceb08200'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:56.404710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-03-27-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-03-27 01:04:56.404725 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.404739 | orchestrator | 2025-03-27 01:04:56.404754 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-03-27 01:04:56.404768 | orchestrator | Thursday 27 March 2025 01:04:39 +0000 (0:00:00.324) 0:00:13.951 ******** 2025-03-27 01:04:56.404782 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.404796 | orchestrator | 2025-03-27 01:04:56.404810 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-03-27 01:04:56.404823 | orchestrator | Thursday 27 March 2025 01:04:39 +0000 (0:00:00.279) 0:00:14.231 ******** 2025-03-27 01:04:56.404850 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.404863 | orchestrator | 2025-03-27 01:04:56.404878 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-03-27 01:04:56.404891 | orchestrator | Thursday 27 March 2025 01:04:39 +0000 (0:00:00.137) 0:00:14.369 ******** 2025-03-27 01:04:56.404905 | orchestrator | skipping: [testbed-node-0] 2025-03-27 
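The long run of skipped loop items above is the per-device iteration that `osd_auto_discovery` would use: loop devices, the partitioned root disk `sda`, the empty data disks `sdb`/`sdc`/`sdd`, and the config-drive `sr0` are all offered as candidates. The sketch below only illustrates the kind of selection such a discovery step implies (empty, non-removable, non-virtual disks survive); it is not the ceph-ansible implementation, and the facts are trimmed copies of the entries in the log:

```python
# Sketch: illustrative filter over ansible_facts['devices']-style entries,
# trimmed from the log above. Expected survivors: the empty 20 GB disks.
devices = {
    "loop0": {"partitions": {}, "holders": [], "removable": "0", "size": "0.00 Bytes"},
    "sda":   {"partitions": {"sda1": {}, "sda14": {}, "sda15": {}, "sda16": {}},
              "holders": [], "removable": "0", "size": "80.00 GB"},
    "sdb":   {"partitions": {}, "holders": [], "removable": "0", "size": "20.00 GB"},
    "sdc":   {"partitions": {}, "holders": [], "removable": "0", "size": "20.00 GB"},
    "sdd":   {"partitions": {}, "holders": [], "removable": "0", "size": "20.00 GB"},
    "sr0":   {"partitions": {}, "holders": [], "removable": "1", "size": "506.00 KB"},
}

def usable_for_osd(name, facts):
    """Keep only empty, non-removable block devices (loop/optical excluded)."""
    if name.startswith(("loop", "ram", "sr")):
        return False
    if facts["removable"] != "0":
        return False
    if facts["partitions"] or facts["holders"]:
        return False
    return facts["size"] != "0.00 Bytes"

print([name for name, facts in devices.items() if usable_for_osd(name, facts)])
# -> ['sdb', 'sdc', 'sdd']
```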
01:04:56.404919 | orchestrator | 2025-03-27 01:04:56.404933 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-03-27 01:04:56.404947 | orchestrator | Thursday 27 March 2025 01:04:39 +0000 (0:00:00.143) 0:00:14.513 ******** 2025-03-27 01:04:56.404966 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.404980 | orchestrator | 2025-03-27 01:04:56.404995 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-03-27 01:04:56.405008 | orchestrator | Thursday 27 March 2025 01:04:40 +0000 (0:00:00.535) 0:00:15.048 ******** 2025-03-27 01:04:56.405022 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.405036 | orchestrator | 2025-03-27 01:04:56.405050 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-27 01:04:56.405064 | orchestrator | Thursday 27 March 2025 01:04:40 +0000 (0:00:00.153) 0:00:15.202 ******** 2025-03-27 01:04:56.405077 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.405091 | orchestrator | 2025-03-27 01:04:56.405104 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-27 01:04:56.405118 | orchestrator | Thursday 27 March 2025 01:04:41 +0000 (0:00:00.531) 0:00:15.734 ******** 2025-03-27 01:04:56.405132 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.405146 | orchestrator | 2025-03-27 01:04:56.405159 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-03-27 01:04:56.405173 | orchestrator | Thursday 27 March 2025 01:04:41 +0000 (0:00:00.358) 0:00:16.092 ******** 2025-03-27 01:04:56.405187 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.405200 | orchestrator | 2025-03-27 01:04:56.405214 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-03-27 01:04:56.405228 | orchestrator | Thursday 27 March 2025 01:04:41 +0000 (0:00:00.256) 0:00:16.348 ******** 2025-03-27 01:04:56.405242 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.405255 | orchestrator | 2025-03-27 01:04:56.405269 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-03-27 01:04:56.405283 | orchestrator | Thursday 27 March 2025 01:04:41 +0000 (0:00:00.143) 0:00:16.492 ******** 2025-03-27 01:04:56.405297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:04:56.405310 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:04:56.405324 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:04:56.405338 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.405352 | orchestrator | 2025-03-27 01:04:56.405365 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-03-27 01:04:56.405379 | orchestrator | Thursday 27 March 2025 01:04:42 +0000 (0:00:00.483) 0:00:16.976 ******** 2025-03-27 01:04:56.405393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:04:56.405406 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:04:56.405420 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:04:56.405434 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.405448 | orchestrator | 2025-03-27 01:04:56.405462 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to 
monitor_address] ************* 2025-03-27 01:04:56.405530 | orchestrator | Thursday 27 March 2025 01:04:42 +0000 (0:00:00.477) 0:00:17.453 ******** 2025-03-27 01:04:56.405547 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:04:56.405561 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-03-27 01:04:56.405575 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-03-27 01:04:56.405588 | orchestrator | 2025-03-27 01:04:56.405603 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-03-27 01:04:56.405625 | orchestrator | Thursday 27 March 2025 01:04:44 +0000 (0:00:01.211) 0:00:18.664 ******** 2025-03-27 01:04:56.405639 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:04:56.405653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:04:56.405667 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:04:56.405681 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.405694 | orchestrator | 2025-03-27 01:04:56.405708 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-03-27 01:04:56.405722 | orchestrator | Thursday 27 March 2025 01:04:44 +0000 (0:00:00.255) 0:00:18.920 ******** 2025-03-27 01:04:56.405736 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-27 01:04:56.405750 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-27 01:04:56.405764 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-27 01:04:56.405777 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.405791 | orchestrator | 2025-03-27 01:04:56.405805 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-03-27 01:04:56.405819 | orchestrator | Thursday 27 March 2025 01:04:44 +0000 (0:00:00.216) 0:00:19.136 ******** 2025-03-27 01:04:56.405832 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-03-27 01:04:56.405846 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-03-27 01:04:56.405860 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-03-27 01:04:56.405874 | orchestrator | 2025-03-27 01:04:56.405888 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-03-27 01:04:56.405901 | orchestrator | Thursday 27 March 2025 01:04:44 +0000 (0:00:00.217) 0:00:19.353 ******** 2025-03-27 01:04:56.405915 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.405929 | orchestrator | 2025-03-27 01:04:56.405943 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-03-27 01:04:56.405956 | orchestrator | Thursday 27 March 2025 01:04:44 +0000 (0:00:00.139) 0:00:19.493 ******** 2025-03-27 01:04:56.405970 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:04:56.405984 | orchestrator | 2025-03-27 01:04:56.405998 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-03-27 01:04:56.406011 | orchestrator | Thursday 27 March 2025 01:04:45 +0000 (0:00:00.390) 0:00:19.884 ******** 2025-03-27 01:04:56.406057 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:04:56.406078 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:04:56.406093 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:04:56.406107 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-03-27 01:04:56.406120 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-27 01:04:56.406134 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-27 01:04:56.406148 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-27 01:04:56.406162 | orchestrator | 2025-03-27 01:04:56.406176 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-03-27 01:04:56.406189 | orchestrator | Thursday 27 March 2025 01:04:46 +0000 (0:00:00.878) 0:00:20.762 ******** 2025-03-27 01:04:56.406203 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-27 01:04:56.406217 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-03-27 01:04:56.406231 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-03-27 01:04:56.406245 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-03-27 01:04:56.406266 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-03-27 01:04:56.406280 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-03-27 01:04:56.406294 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-03-27 01:04:56.406308 | orchestrator | 2025-03-27 01:04:56.406322 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-03-27 01:04:56.406336 | orchestrator | Thursday 27 March 2025 01:04:47 +0000 (0:00:01.654) 0:00:22.417 ******** 2025-03-27 01:04:56.406350 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:04:56.406364 | orchestrator | 2025-03-27 01:04:56.406377 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-03-27 01:04:56.406391 | orchestrator | Thursday 27 March 2025 01:04:48 +0000 (0:00:00.490) 0:00:22.907 ******** 2025-03-27 01:04:56.406405 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 01:04:56.406418 | orchestrator | 2025-03-27 01:04:56.406433 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-03-27 01:04:56.406452 | orchestrator | Thursday 27 March 2025 01:04:48 +0000 (0:00:00.682) 0:00:23.589 ******** 2025-03-27 01:04:56.406466 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-03-27 01:04:56.406537 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-03-27 01:04:56.406553 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-03-27 01:04:56.406567 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-03-27 01:04:56.406580 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-03-27 01:04:56.406594 | orchestrator | changed: [testbed-node-0] => 
(item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-03-27 01:04:56.406608 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-03-27 01:04:56.406621 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-03-27 01:04:56.406635 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-03-27 01:04:56.406648 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-03-27 01:04:56.406662 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-03-27 01:04:56.406676 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-03-27 01:04:56.406689 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-03-27 01:04:56.406703 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-03-27 01:04:56.406716 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-03-27 01:04:56.406730 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-03-27 01:04:56.406744 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-03-27 01:04:56.406758 | orchestrator | 2025-03-27 01:04:56.406771 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:04:56.406785 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-03-27 01:04:56.406801 | orchestrator | 2025-03-27 01:04:56.406814 | orchestrator | 2025-03-27 01:04:56.406828 | orchestrator | 2025-03-27 01:04:56.406841 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:04:56.406855 | orchestrator | Thursday 27 March 2025 01:04:55 +0000 (0:00:06.733) 0:00:30.323 ******** 2025-03-27 01:04:56.406868 | orchestrator | =============================================================================== 2025-03-27 01:04:56.406882 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.73s 2025-03-27 01:04:56.406903 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.14s 2025-03-27 01:04:56.406917 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.76s 2025-03-27 01:04:56.406937 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.65s 2025-03-27 01:04:59.438857 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.21s 2025-03-27 01:04:59.438937 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.96s 2025-03-27 01:04:59.438950 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.90s 2025-03-27 01:04:59.438960 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.88s 2025-03-27 01:04:59.438970 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.72s 2025-03-27 01:04:59.438980 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.68s 2025-03-27 01:04:59.438989 | orchestrator | ceph-facts : check if it is atomic host 
--------------------------------- 0.67s 2025-03-27 01:04:59.439006 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.54s 2025-03-27 01:04:59.439016 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.54s 2025-03-27 01:04:59.439025 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.53s 2025-03-27 01:04:59.439035 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.49s 2025-03-27 01:04:59.439044 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.48s 2025-03-27 01:04:59.439053 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.48s 2025-03-27 01:04:59.439062 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.46s 2025-03-27 01:04:59.439071 | orchestrator | ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli --- 0.39s 2025-03-27 01:04:59.439081 | orchestrator | ceph-facts : set osd_pool_default_crush_rule fact ----------------------- 0.36s 2025-03-27 01:04:59.439091 | orchestrator | 2025-03-27 01:04:56 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:04:59.439112 | orchestrator | 2025-03-27 01:04:59 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state STARTED 2025-03-27 01:04:59.443384 | orchestrator | 2025-03-27 01:04:59 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:04:59.445187 | orchestrator | 2025-03-27 01:04:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:02.501092 | orchestrator | 2025-03-27 01:04:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:02.501234 | orchestrator | 2025-03-27 01:05:02 | INFO  | Task f11c7d54-bcad-42b7-89a0-86fcabb9595b is in state SUCCESS 2025-03-27 01:05:02.504230 | orchestrator | 2025-03-27 01:05:02 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:02.505279 | orchestrator | 2025-03-27 01:05:02 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:05:02.508826 | orchestrator | 2025-03-27 01:05:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:05.556291 | orchestrator | 2025-03-27 01:05:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:05.556419 | orchestrator | 2025-03-27 01:05:05 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:05.557288 | orchestrator | 2025-03-27 01:05:05 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:05:05.558612 | orchestrator | 2025-03-27 01:05:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:08.595264 | orchestrator | 2025-03-27 01:05:05 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:08.595422 | orchestrator | 2025-03-27 01:05:08 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:08.596993 | orchestrator | 2025-03-27 01:05:08 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:05:08.598563 | orchestrator | 2025-03-27 01:05:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:11.664949 | orchestrator | 2025-03-27 01:05:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:11.665085 | orchestrator | 2025-03-27 
01:05:11 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:11.666827 | orchestrator | 2025-03-27 01:05:11 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:05:11.668810 | orchestrator | 2025-03-27 01:05:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:14.728009 | orchestrator | 2025-03-27 01:05:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:14.728163 | orchestrator | 2025-03-27 01:05:14 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:14.732007 | orchestrator | 2025-03-27 01:05:14 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:05:14.733561 | orchestrator | 2025-03-27 01:05:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:17.782248 | orchestrator | 2025-03-27 01:05:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:17.782382 | orchestrator | 2025-03-27 01:05:17 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:17.784750 | orchestrator | 2025-03-27 01:05:17 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:05:17.786395 | orchestrator | 2025-03-27 01:05:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:20.847000 | orchestrator | 2025-03-27 01:05:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:20.847119 | orchestrator | 2025-03-27 01:05:20 | INFO  | Task f31d80ef-d827-452e-8559-ed5bea318b8c is in state STARTED 2025-03-27 01:05:20.849071 | orchestrator | 2025-03-27 01:05:20 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:20.849625 | orchestrator | 2025-03-27 01:05:20 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:05:20.851701 | orchestrator | 2025-03-27 01:05:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:23.916712 | orchestrator | 2025-03-27 01:05:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:23.916838 | orchestrator | 2025-03-27 01:05:23 | INFO  | Task f31d80ef-d827-452e-8559-ed5bea318b8c is in state STARTED 2025-03-27 01:05:23.918204 | orchestrator | 2025-03-27 01:05:23 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:23.918238 | orchestrator | 2025-03-27 01:05:23 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state STARTED 2025-03-27 01:05:23.919652 | orchestrator | 2025-03-27 01:05:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:26.976799 | orchestrator | 2025-03-27 01:05:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:26.976948 | orchestrator | 2025-03-27 01:05:26 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:05:26.977235 | orchestrator | 2025-03-27 01:05:26 | INFO  | Task f31d80ef-d827-452e-8559-ed5bea318b8c is in state STARTED 2025-03-27 01:05:26.978879 | orchestrator | 2025-03-27 01:05:26 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:26.980222 | orchestrator | 2025-03-27 01:05:26 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:05:26.982666 | orchestrator | 2025-03-27 01:05:26 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:05:26.986684 | orchestrator | 2025-03-27 
01:05:26 | INFO  | Task 363b1e1b-99ea-480c-bf44-b695be0d0418 is in state SUCCESS 2025-03-27 01:05:26.988392 | orchestrator | 2025-03-27 01:05:26.988432 | orchestrator | 2025-03-27 01:05:26.988448 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-03-27 01:05:26.988464 | orchestrator | 2025-03-27 01:05:26.988507 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-03-27 01:05:26.988523 | orchestrator | Thursday 27 March 2025 01:04:16 +0000 (0:00:00.182) 0:00:00.182 ******** 2025-03-27 01:05:26.988537 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-03-27 01:05:26.988551 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-27 01:05:26.988566 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-27 01:05:26.988580 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-03-27 01:05:26.988594 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-27 01:05:26.988751 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-03-27 01:05:26.988773 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-03-27 01:05:26.988789 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-03-27 01:05:26.988804 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-03-27 01:05:26.988819 | orchestrator | 2025-03-27 01:05:26.988834 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-03-27 01:05:26.988849 | orchestrator | Thursday 27 March 2025 01:04:19 +0000 (0:00:03.189) 0:00:03.371 ******** 2025-03-27 01:05:26.988864 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-03-27 01:05:26.988879 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-27 01:05:26.989339 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-27 01:05:26.989358 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-03-27 01:05:26.989372 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-03-27 01:05:26.989470 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-03-27 01:05:26.989510 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-03-27 01:05:26.989524 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-03-27 01:05:26.989538 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-03-27 01:05:26.989552 | orchestrator | 2025-03-27 01:05:26.989581 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-03-27 01:05:26.989596 | orchestrator | Thursday 27 March 2025 01:04:19 +0000 (0:00:00.258) 0:00:03.630 ******** 2025-03-27 01:05:26.989610 | orchestrator | ok: [testbed-manager] => { 2025-03-27 01:05:26.989628 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. 
It takes a few minutes for this task to complete." 2025-03-27 01:05:26.989643 | orchestrator | } 2025-03-27 01:05:26.989657 | orchestrator | 2025-03-27 01:05:26.989671 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-03-27 01:05:26.989685 | orchestrator | Thursday 27 March 2025 01:04:19 +0000 (0:00:00.167) 0:00:03.798 ******** 2025-03-27 01:05:26.989714 | orchestrator | changed: [testbed-manager] 2025-03-27 01:05:26.989728 | orchestrator | 2025-03-27 01:05:26.989742 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-03-27 01:05:26.989756 | orchestrator | Thursday 27 March 2025 01:04:56 +0000 (0:00:36.413) 0:00:40.211 ******** 2025-03-27 01:05:26.989771 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-03-27 01:05:26.989785 | orchestrator | 2025-03-27 01:05:26.989799 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-03-27 01:05:26.989813 | orchestrator | Thursday 27 March 2025 01:04:56 +0000 (0:00:00.538) 0:00:40.750 ******** 2025-03-27 01:05:26.989828 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-03-27 01:05:26.989843 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-03-27 01:05:26.989857 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-03-27 01:05:26.989872 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-03-27 01:05:26.989886 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-03-27 01:05:26.989941 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-03-27 01:05:26.989958 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-03-27 01:05:26.989972 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-03-27 01:05:26.989986 | orchestrator | 2025-03-27 01:05:26.990000 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-03-27 01:05:26.990066 | orchestrator | Thursday 27 March 2025 01:05:00 +0000 (0:00:03.132) 0:00:43.883 ******** 2025-03-27 01:05:26.990085 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:05:26.990099 | orchestrator | 2025-03-27 01:05:26.990115 | orchestrator | PLAY RECAP ********************************************************************* 
2025-03-27 01:05:26.990132 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 01:05:26.990148 | orchestrator | 2025-03-27 01:05:26.990164 | orchestrator | Thursday 27 March 2025 01:05:00 +0000 (0:00:00.041) 0:00:43.924 ******** 2025-03-27 01:05:26.990179 | orchestrator | =============================================================================== 2025-03-27 01:05:26.990196 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 36.41s 2025-03-27 01:05:26.990211 | orchestrator | Check ceph keys --------------------------------------------------------- 3.19s 2025-03-27 01:05:26.990227 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 3.13s 2025-03-27 01:05:26.990243 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.54s 2025-03-27 01:05:26.990258 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.26s 2025-03-27 01:05:26.990274 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.17s 2025-03-27 01:05:26.990299 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.04s 2025-03-27 01:05:26.990315 | orchestrator | 2025-03-27 01:05:26.990331 | orchestrator | 2025-03-27 01:05:26.990346 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:05:26.990362 | orchestrator | 2025-03-27 01:05:26.990378 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:05:26.990394 | orchestrator | Thursday 27 March 2025 01:02:40 +0000 (0:00:00.549) 0:00:00.549 ******** 2025-03-27 01:05:26.990410 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:05:26.990427 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:05:26.990444 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:05:26.990459 | orchestrator | 2025-03-27 01:05:26.990496 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:05:26.990511 | orchestrator | Thursday 27 March 2025 01:02:40 +0000 (0:00:00.462) 0:00:01.011 ******** 2025-03-27 01:05:26.990525 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-03-27 01:05:26.990539 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-03-27 01:05:26.990553 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-03-27 01:05:26.990566 | orchestrator | 2025-03-27 01:05:26.990580 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-03-27 01:05:26.990594 | orchestrator | 2025-03-27 01:05:26.990608 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-27 01:05:26.990622 | orchestrator | Thursday 27 March 2025 01:02:41 +0000 (0:00:00.310) 0:00:01.321 ******** 2025-03-27 01:05:26.990636 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:05:26.990651 | orchestrator | 2025-03-27 01:05:26.990665 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-03-27 01:05:26.990679 | orchestrator | Thursday 27 March 2025 01:02:42 +0000 (0:00:00.877) 0:00:02.199 ******** 2025-03-27 01:05:26.990697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.990757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.990783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.990800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.990816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.990831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.990878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.990896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.990922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.990937 | orchestrator | 
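The loop items printed above are the entries of kolla-ansible's keystone service map: each value bundles the container name, image, bind mounts, healthcheck and, for the API container, the HAProxy frontend settings. The Python sketch below is only an illustrative, trimmed restatement of that structure, with values copied from the log output; the /etc/kolla/<service>/ config-directory layout is read off the first bind mount in each 'volumes' list, and everything about how the role itself consumes this map is an assumption rather than something taken from this output.

# Illustrative sketch only: a trimmed, Python-shaped view of the keystone
# service map the tasks above loop over (values copied from the log output;
# the healthcheck_curl address is the node's own internal API address).
keystone_services = {
    "keystone": {
        "container_name": "keystone",
        "image": "registry.osism.tech/kolla/release/keystone:25.0.1.20241206",
        "volumes": ["/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro"],
        "healthcheck": {"test": ["CMD-SHELL",
                                 "healthcheck_curl http://192.168.16.10:5000"]},
        "haproxy": {"keystone_internal": {"port": "5000"},
                    "keystone_external": {"external_fqdn": "api.testbed.osism.xyz"}},
    },
    "keystone-ssh": {
        "container_name": "keystone_ssh",
        "image": "registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206",
        "volumes": ["/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro"],
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_listen sshd 8023"]},
    },
    "keystone-fernet": {
        "container_name": "keystone_fernet",
        "image": "registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206",
        "volumes": ["/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro"],
        "healthcheck": {"test": ["CMD-SHELL", "/usr/bin/fernet-healthcheck.sh"]},
    },
}

# "Ensuring config directories exist" presumably creates one directory per
# enabled service; the path can be read off the first bind mount above.
for name, svc in keystone_services.items():
    config_dir = svc["volumes"][0].split(":")[0]   # e.g. /etc/kolla/keystone/
    print(f"{config_dir}  # config dir for container {svc['container_name']}")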
2025-03-27 01:05:26.990951 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-03-27 01:05:26.990966 | orchestrator | Thursday 27 March 2025 01:02:44 +0000 (0:00:02.419) 0:00:04.619 ******** 2025-03-27 01:05:26.990980 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-03-27 01:05:26.990995 | orchestrator | 2025-03-27 01:05:26.991015 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-03-27 01:05:26.991030 | orchestrator | Thursday 27 March 2025 01:02:45 +0000 (0:00:00.592) 0:00:05.211 ******** 2025-03-27 01:05:26.991044 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:05:26.991059 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:05:26.991073 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:05:26.991087 | orchestrator | 2025-03-27 01:05:26.991101 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-03-27 01:05:26.991115 | orchestrator | Thursday 27 March 2025 01:02:45 +0000 (0:00:00.555) 0:00:05.766 ******** 2025-03-27 01:05:26.991128 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 01:05:26.991142 | orchestrator | 2025-03-27 01:05:26.991156 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-27 01:05:26.991170 | orchestrator | Thursday 27 March 2025 01:02:46 +0000 (0:00:00.456) 0:00:06.223 ******** 2025-03-27 01:05:26.991184 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:05:26.991198 | orchestrator | 2025-03-27 01:05:26.991212 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-03-27 01:05:26.991226 | orchestrator | Thursday 27 March 2025 01:02:46 +0000 (0:00:00.670) 0:00:06.894 ******** 2025-03-27 01:05:26.991241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.991264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.991287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.991303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.991318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.991333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.991357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.991379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.991394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.991408 | orchestrator | 2025-03-27 01:05:26.991423 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-03-27 01:05:26.991437 | orchestrator | Thursday 27 March 2025 01:02:50 +0000 (0:00:03.427) 0:00:10.321 ******** 2025-03-27 01:05:26.991452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-27 01:05:26.991468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.991501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 01:05:26.991523 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.991546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-27 01:05:26.991562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.991576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 01:05:26.991591 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:05:26.991605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-27 01:05:26.991620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.991648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 01:05:26.991663 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:05:26.991677 | orchestrator | 2025-03-27 01:05:26.991692 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-03-27 01:05:26.991706 | orchestrator | Thursday 27 March 2025 01:02:51 +0000 (0:00:00.853) 0:00:11.175 ******** 2025-03-27 01:05:26.991720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-27 01:05:26.991735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.991750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 01:05:26.991764 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.991779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-27 01:05:26.991806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.991822 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 01:05:26.991836 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:05:26.991851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-27 01:05:26.991872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.991887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-27 01:05:26.991908 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:05:26.991921 | orchestrator | 2025-03-27 01:05:26.991936 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-03-27 01:05:26.991950 | orchestrator | Thursday 27 March 2025 01:02:52 +0000 (0:00:01.055) 0:00:12.230 ******** 2025-03-27 01:05:26.991972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.991988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.992003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.992019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992124 | orchestrator | 
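As the adjacent tasks show, config.json is rendered for every container in that map, while keystone.conf is only copied for the main keystone service (the keystone-ssh item is skipped, and presumably keystone-fernet as well). The short continuation of the sketch above just makes that selection explicit; the helper name is invented for illustration and does not exist in kolla-ansible.

# Hypothetical helper mirroring what the surrounding log shows: every service
# receives a config.json, but only the API container also gets keystone.conf.
def config_files_for(services: dict) -> dict:
    files = {}
    for name in services:
        files[name] = ["config.json"]
        if name == "keystone":  # only the main keystone container, per the log
            files[name].append("keystone.conf")
    return files

# Usage with the keystone_services sketch above:
#   config_files_for(keystone_services)
#   -> {"keystone": ["config.json", "keystone.conf"],
#       "keystone-ssh": ["config.json"],
#       "keystone-fernet": ["config.json"]}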
2025-03-27 01:05:26.992138 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-03-27 01:05:26.992152 | orchestrator | Thursday 27 March 2025 01:02:55 +0000 (0:00:03.477) 0:00:15.708 ******** 2025-03-27 01:05:26.992166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.992189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.992211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.992227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.992241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.992262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.992277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992326 | orchestrator | 2025-03-27 01:05:26.992340 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-03-27 01:05:26.992354 | orchestrator | Thursday 27 March 2025 01:03:02 +0000 (0:00:06.554) 0:00:22.263 ******** 2025-03-27 01:05:26.992368 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:05:26.992382 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:05:26.992396 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:05:26.992410 | orchestrator | 2025-03-27 01:05:26.992424 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-03-27 01:05:26.992438 | orchestrator | Thursday 27 March 2025 01:03:04 +0000 (0:00:02.368) 0:00:24.631 ******** 2025-03-27 01:05:26.992452 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.992465 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:05:26.992511 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:05:26.992526 | orchestrator | 2025-03-27 01:05:26.992540 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-03-27 01:05:26.992554 | orchestrator | Thursday 27 March 2025 01:03:05 +0000 (0:00:00.983) 0:00:25.614 ******** 2025-03-27 01:05:26.992567 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.992588 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:05:26.992602 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:05:26.992615 | orchestrator | 2025-03-27 01:05:26.992629 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-03-27 01:05:26.992643 | orchestrator | Thursday 27 March 2025 01:03:06 +0000 (0:00:00.539) 0:00:26.153 ******** 2025-03-27 01:05:26.992657 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.992677 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:05:26.992691 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:05:26.992704 | orchestrator | 2025-03-27 01:05:26.992718 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-03-27 01:05:26.992732 | orchestrator | Thursday 27 March 2025 01:03:06 +0000 (0:00:00.670) 0:00:26.824 ******** 2025-03-27 01:05:26.992747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.992762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.992785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.992801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.992822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.992838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-27 01:05:26.992852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.992902 | orchestrator | 2025-03-27 01:05:26.992916 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-27 01:05:26.992931 | orchestrator | Thursday 27 March 2025 01:03:09 +0000 (0:00:02.900) 0:00:29.724 ******** 2025-03-27 01:05:26.992952 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.992966 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:05:26.992980 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:05:26.992994 | orchestrator | 2025-03-27 01:05:26.993008 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-03-27 01:05:26.993021 | orchestrator | Thursday 27 March 2025 01:03:10 +0000 (0:00:00.340) 0:00:30.065 ******** 2025-03-27 01:05:26.993035 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-03-27 01:05:26.993049 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-03-27 01:05:26.993064 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-03-27 01:05:26.993078 | orchestrator | 2025-03-27 01:05:26.993091 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-03-27 01:05:26.993106 | orchestrator | Thursday 27 March 2025 01:03:12 +0000 (0:00:02.240) 0:00:32.305 ******** 2025-03-27 01:05:26.993120 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 01:05:26.993133 | orchestrator | 2025-03-27 01:05:26.993153 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-03-27 01:05:26.993167 | orchestrator | Thursday 27 March 2025 01:03:12 +0000 (0:00:00.674) 0:00:32.980 ******** 2025-03-27 01:05:26.993181 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.993195 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:05:26.993209 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:05:26.993223 | orchestrator | 2025-03-27 01:05:26.993237 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-03-27 01:05:26.993251 | orchestrator | Thursday 27 March 2025 01:03:13 +0000 (0:00:00.930) 0:00:33.910 ******** 2025-03-27 01:05:26.993265 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-27 01:05:26.993279 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 01:05:26.993293 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-27 01:05:26.993307 | orchestrator | 2025-03-27 01:05:26.993321 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-03-27 01:05:26.993335 | orchestrator | Thursday 27 March 2025 01:03:15 +0000 (0:00:01.203) 0:00:35.114 ******** 2025-03-27 01:05:26.993348 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:05:26.993362 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:05:26.993376 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:05:26.993390 | orchestrator | 2025-03-27 01:05:26.993404 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-03-27 01:05:26.993418 | orchestrator | Thursday 27 March 2025 01:03:15 +0000 (0:00:00.421) 0:00:35.536 ******** 2025-03-27 01:05:26.993432 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-03-27 01:05:26.993446 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-03-27 01:05:26.993460 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-03-27 01:05:26.993490 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-03-27 01:05:26.993505 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-03-27 01:05:26.993519 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-03-27 01:05:26.993533 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-03-27 01:05:26.993547 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 
'fernet-node-sync.sh'}) 2025-03-27 01:05:26.993561 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-03-27 01:05:26.993575 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-03-27 01:05:26.993597 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-03-27 01:05:26.993616 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-03-27 01:05:26.993631 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-03-27 01:05:26.993650 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-03-27 01:05:26.993664 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-03-27 01:05:26.993678 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-27 01:05:26.993692 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-27 01:05:26.993706 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-27 01:05:26.993720 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-27 01:05:26.993734 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-27 01:05:26.993747 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-27 01:05:26.993761 | orchestrator | 2025-03-27 01:05:26.993775 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-03-27 01:05:26.993789 | orchestrator | Thursday 27 March 2025 01:03:27 +0000 (0:00:11.842) 0:00:47.378 ******** 2025-03-27 01:05:26.993802 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-27 01:05:26.993816 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-27 01:05:26.993830 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-27 01:05:26.993844 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-27 01:05:26.993858 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-27 01:05:26.993871 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-27 01:05:26.993885 | orchestrator | 2025-03-27 01:05:26.993899 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-03-27 01:05:26.993912 | orchestrator | Thursday 27 March 2025 01:03:30 +0000 (0:00:03.354) 0:00:50.733 ******** 2025-03-27 01:05:26.993927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.993942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.993972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-27 01:05:26.993988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.994002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.994044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-03-27 01:05:26.994061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.994083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.994104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-03-27 01:05:26.994119 | orchestrator | 2025-03-27 01:05:26.994133 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-27 01:05:26.994147 | orchestrator | Thursday 27 March 2025 01:03:33 +0000 (0:00:03.226) 0:00:53.959 ******** 2025-03-27 01:05:26.994161 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.994175 | orchestrator | skipping: 
[testbed-node-1] 2025-03-27 01:05:26.994189 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:05:26.994203 | orchestrator | 2025-03-27 01:05:26.994217 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-03-27 01:05:26.994230 | orchestrator | Thursday 27 March 2025 01:03:34 +0000 (0:00:00.347) 0:00:54.307 ******** 2025-03-27 01:05:26.994336 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:05:26.994353 | orchestrator | 2025-03-27 01:05:26.994367 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-03-27 01:05:26.994381 | orchestrator | Thursday 27 March 2025 01:03:37 +0000 (0:00:02.783) 0:00:57.091 ******** 2025-03-27 01:05:26.994394 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:05:26.994408 | orchestrator | 2025-03-27 01:05:26.994422 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-03-27 01:05:26.994441 | orchestrator | Thursday 27 March 2025 01:03:39 +0000 (0:00:02.454) 0:00:59.545 ******** 2025-03-27 01:05:26.994456 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:05:26.994470 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:05:26.994540 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:05:26.994555 | orchestrator | 2025-03-27 01:05:26.994569 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-03-27 01:05:26.994582 | orchestrator | Thursday 27 March 2025 01:03:40 +0000 (0:00:01.151) 0:01:00.697 ******** 2025-03-27 01:05:26.994596 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:05:26.994610 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:05:26.994624 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:05:26.994637 | orchestrator | 2025-03-27 01:05:26.994650 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-03-27 01:05:26.994662 | orchestrator | Thursday 27 March 2025 01:03:41 +0000 (0:00:00.433) 0:01:01.131 ******** 2025-03-27 01:05:26.994674 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.994687 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:05:26.994699 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:05:26.994711 | orchestrator | 2025-03-27 01:05:26.994723 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-03-27 01:05:26.994744 | orchestrator | Thursday 27 March 2025 01:03:41 +0000 (0:00:00.503) 0:01:01.634 ******** 2025-03-27 01:05:26.994756 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:05:26.994768 | orchestrator | 2025-03-27 01:05:26.994780 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-03-27 01:05:26.994793 | orchestrator | Thursday 27 March 2025 01:03:55 +0000 (0:00:13.628) 0:01:15.263 ******** 2025-03-27 01:05:26.994805 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:05:26.994817 | orchestrator | 2025-03-27 01:05:26.994829 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-03-27 01:05:26.994842 | orchestrator | Thursday 27 March 2025 01:04:05 +0000 (0:00:09.941) 0:01:25.205 ******** 2025-03-27 01:05:26.994854 | orchestrator | 2025-03-27 01:05:26.994866 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-03-27 01:05:26.994878 | orchestrator | Thursday 27 March 2025 01:04:05 +0000 (0:00:00.065) 0:01:25.270 ******** 
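The bootstrap containers above presumably initialise the Keystone database and the first fernet key set; the handlers that follow restart the keystone-ssh, keystone-fernet and keystone containers, and the fernet-push.sh / fernet-node-sync.sh / ssh_config files copied earlier indicate that keys are then synchronised between the nodes over the keystone-ssh sidecar's sshd on port 8023, which is why the play waits for that port before running key distribution. A minimal sketch of such a readiness check, assuming a plain TCP connect is sufficient (wait_for_port is a hypothetical helper; the role itself presumably relies on Ansible's wait_for):

    import socket
    import time

    def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
        # Retry a plain TCP connect once per second until the port answers or we time out.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return True
            except OSError:
                time.sleep(1)
        return False

    # e.g. wait_for_port("192.168.16.10", 8023)  # keystone-ssh sshd, per the healthcheck above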
2025-03-27 01:05:26.994890 | orchestrator | 2025-03-27 01:05:26.994902 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-03-27 01:05:26.994914 | orchestrator | Thursday 27 March 2025 01:04:05 +0000 (0:00:00.075) 0:01:25.346 ******** 2025-03-27 01:05:26.994926 | orchestrator | 2025-03-27 01:05:26.994939 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-03-27 01:05:26.994951 | orchestrator | Thursday 27 March 2025 01:04:05 +0000 (0:00:00.059) 0:01:25.406 ******** 2025-03-27 01:05:26.994966 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:05:26.994979 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:05:26.994993 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:05:26.995007 | orchestrator | 2025-03-27 01:05:26.995021 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-03-27 01:05:26.995035 | orchestrator | Thursday 27 March 2025 01:04:14 +0000 (0:00:08.995) 0:01:34.401 ******** 2025-03-27 01:05:26.995049 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:05:26.995063 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:05:26.995077 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:05:26.995091 | orchestrator | 2025-03-27 01:05:26.995105 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-03-27 01:05:26.995118 | orchestrator | Thursday 27 March 2025 01:04:24 +0000 (0:00:09.812) 0:01:44.214 ******** 2025-03-27 01:05:26.995133 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:05:26.995147 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:05:26.995161 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:05:26.995175 | orchestrator | 2025-03-27 01:05:26.995189 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-27 01:05:26.995203 | orchestrator | Thursday 27 March 2025 01:04:34 +0000 (0:00:10.762) 0:01:54.976 ******** 2025-03-27 01:05:26.995217 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:05:26.995231 | orchestrator | 2025-03-27 01:05:26.995245 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-03-27 01:05:26.995265 | orchestrator | Thursday 27 March 2025 01:04:35 +0000 (0:00:00.963) 0:01:55.939 ******** 2025-03-27 01:05:26.995282 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:05:26.995296 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:05:26.995310 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:05:26.995324 | orchestrator | 2025-03-27 01:05:26.995336 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-03-27 01:05:26.995348 | orchestrator | Thursday 27 March 2025 01:04:37 +0000 (0:00:01.109) 0:01:57.049 ******** 2025-03-27 01:05:26.995360 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:05:26.995373 | orchestrator | 2025-03-27 01:05:26.995385 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-03-27 01:05:26.995397 | orchestrator | Thursday 27 March 2025 01:04:38 +0000 (0:00:01.547) 0:01:58.596 ******** 2025-03-27 01:05:26.995409 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-03-27 01:05:26.995427 | orchestrator | 2025-03-27 01:05:26.995440 | orchestrator | TASK [service-ks-register : keystone 
| Creating services] ********************** 2025-03-27 01:05:26.995452 | orchestrator | Thursday 27 March 2025 01:04:49 +0000 (0:00:10.662) 0:02:09.259 ******** 2025-03-27 01:05:26.995464 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-03-27 01:05:26.995492 | orchestrator | 2025-03-27 01:05:26.995505 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-03-27 01:05:26.995526 | orchestrator | Thursday 27 March 2025 01:05:11 +0000 (0:00:21.955) 0:02:31.214 ******** 2025-03-27 01:05:26.995539 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-03-27 01:05:26.995552 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-03-27 01:05:26.995564 | orchestrator | 2025-03-27 01:05:26.995576 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-03-27 01:05:26.995593 | orchestrator | Thursday 27 March 2025 01:05:19 +0000 (0:00:08.037) 0:02:39.251 ******** 2025-03-27 01:05:26.995605 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.995618 | orchestrator | 2025-03-27 01:05:26.995630 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-03-27 01:05:26.995643 | orchestrator | Thursday 27 March 2025 01:05:19 +0000 (0:00:00.131) 0:02:39.382 ******** 2025-03-27 01:05:26.995655 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.995667 | orchestrator | 2025-03-27 01:05:26.995679 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-03-27 01:05:26.995691 | orchestrator | Thursday 27 March 2025 01:05:19 +0000 (0:00:00.146) 0:02:39.529 ******** 2025-03-27 01:05:26.995703 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.995715 | orchestrator | 2025-03-27 01:05:26.995727 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-03-27 01:05:26.995739 | orchestrator | Thursday 27 March 2025 01:05:19 +0000 (0:00:00.129) 0:02:39.659 ******** 2025-03-27 01:05:26.995751 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.995763 | orchestrator | 2025-03-27 01:05:26.995776 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-03-27 01:05:26.995788 | orchestrator | Thursday 27 March 2025 01:05:20 +0000 (0:00:00.426) 0:02:40.085 ******** 2025-03-27 01:05:26.995800 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:05:26.995813 | orchestrator | 2025-03-27 01:05:26.995825 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-03-27 01:05:26.995837 | orchestrator | Thursday 27 March 2025 01:05:23 +0000 (0:00:03.665) 0:02:43.751 ******** 2025-03-27 01:05:26.995849 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:05:26.995861 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:05:26.995878 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:05:26.995891 | orchestrator | 2025-03-27 01:05:26.995903 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:05:26.995915 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-03-27 01:05:26.995928 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-03-27 
01:05:26.995940 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-03-27 01:05:26.995953 | orchestrator | 2025-03-27 01:05:26.995965 | orchestrator | 2025-03-27 01:05:26.995977 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:05:26.995989 | orchestrator | Thursday 27 March 2025 01:05:24 +0000 (0:00:00.596) 0:02:44.347 ******** 2025-03-27 01:05:26.996002 | orchestrator | =============================================================================== 2025-03-27 01:05:26.996014 | orchestrator | service-ks-register : keystone | Creating services --------------------- 21.96s 2025-03-27 01:05:26.996032 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.63s 2025-03-27 01:05:26.996044 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 11.84s 2025-03-27 01:05:26.996057 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.76s 2025-03-27 01:05:26.996069 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.66s 2025-03-27 01:05:26.996081 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.94s 2025-03-27 01:05:26.996094 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.81s 2025-03-27 01:05:26.996106 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.00s 2025-03-27 01:05:26.996118 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 8.04s 2025-03-27 01:05:26.996130 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.55s 2025-03-27 01:05:26.996147 | orchestrator | keystone : Creating default user role ----------------------------------- 3.67s 2025-03-27 01:05:30.047121 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.48s 2025-03-27 01:05:30.047245 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.43s 2025-03-27 01:05:30.047264 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.35s 2025-03-27 01:05:30.047280 | orchestrator | keystone : Check keystone containers ------------------------------------ 3.23s 2025-03-27 01:05:30.047311 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.90s 2025-03-27 01:05:30.047326 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.78s 2025-03-27 01:05:30.047339 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.45s 2025-03-27 01:05:30.047353 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.42s 2025-03-27 01:05:30.047367 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.37s 2025-03-27 01:05:30.047382 | orchestrator | 2025-03-27 01:05:26 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:05:30.047397 | orchestrator | 2025-03-27 01:05:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:30.047411 | orchestrator | 2025-03-27 01:05:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:30.047580 | orchestrator | 2025-03-27 01:05:30 | INFO  | Task 
f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:05:30.047611 | orchestrator | 2025-03-27 01:05:30 | INFO  | Task f31d80ef-d827-452e-8559-ed5bea318b8c is in state STARTED 2025-03-27 01:05:30.048028 | orchestrator | 2025-03-27 01:05:30 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:30.048886 | orchestrator | 2025-03-27 01:05:30 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:05:30.051466 | orchestrator | 2025-03-27 01:05:30 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:05:30.052172 | orchestrator | 2025-03-27 01:05:30 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:05:30.054613 | orchestrator | 2025-03-27 01:05:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:33.095254 | orchestrator | 2025-03-27 01:05:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:33.095387 | orchestrator | 2025-03-27 01:05:33 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:05:33.095677 | orchestrator | 2025-03-27 01:05:33 | INFO  | Task f31d80ef-d827-452e-8559-ed5bea318b8c is in state SUCCESS 2025-03-27 01:05:33.096634 | orchestrator | 2025-03-27 01:05:33 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:33.097519 | orchestrator | 2025-03-27 01:05:33 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:05:33.098150 | orchestrator | 2025-03-27 01:05:33 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:05:33.098973 | orchestrator | 2025-03-27 01:05:33 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:05:33.099917 | orchestrator | 2025-03-27 01:05:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:36.147844 | orchestrator | 2025-03-27 01:05:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:36.147968 | orchestrator | 2025-03-27 01:05:36 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:05:36.148450 | orchestrator | 2025-03-27 01:05:36 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:36.148600 | orchestrator | 2025-03-27 01:05:36 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:05:36.148635 | orchestrator | 2025-03-27 01:05:36 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:05:36.149106 | orchestrator | 2025-03-27 01:05:36 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:05:36.149784 | orchestrator | 2025-03-27 01:05:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:39.192867 | orchestrator | 2025-03-27 01:05:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:39.193032 | orchestrator | 2025-03-27 01:05:39 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:05:39.194626 | orchestrator | 2025-03-27 01:05:39 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:39.196265 | orchestrator | 2025-03-27 01:05:39 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:05:39.198279 | orchestrator | 2025-03-27 01:05:39 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:05:39.200027 | 
orchestrator | 2025-03-27 01:05:39 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:05:39.200961 | orchestrator | 2025-03-27 01:05:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:39.201614 | orchestrator | 2025-03-27 01:05:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:42.251558 | orchestrator | 2025-03-27 01:05:42 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:05:42.253598 | orchestrator | 2025-03-27 01:05:42 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:42.255869 | orchestrator | 2025-03-27 01:05:42 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:05:42.258117 | orchestrator | 2025-03-27 01:05:42 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:05:42.260290 | orchestrator | 2025-03-27 01:05:42 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:05:42.261824 | orchestrator | 2025-03-27 01:05:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:45.320125 | orchestrator | 2025-03-27 01:05:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:45.320260 | orchestrator | 2025-03-27 01:05:45 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:05:45.322443 | orchestrator | 2025-03-27 01:05:45 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:45.325211 | orchestrator | 2025-03-27 01:05:45 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:05:45.326823 | orchestrator | 2025-03-27 01:05:45 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:05:45.328799 | orchestrator | 2025-03-27 01:05:45 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:05:45.330731 | orchestrator | 2025-03-27 01:05:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:48.374336 | orchestrator | 2025-03-27 01:05:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:48.374461 | orchestrator | 2025-03-27 01:05:48 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:05:48.375324 | orchestrator | 2025-03-27 01:05:48 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:48.376613 | orchestrator | 2025-03-27 01:05:48 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:05:48.377802 | orchestrator | 2025-03-27 01:05:48 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:05:48.379150 | orchestrator | 2025-03-27 01:05:48 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:05:48.384871 | orchestrator | 2025-03-27 01:05:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:51.432912 | orchestrator | 2025-03-27 01:05:48 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:51.433031 | orchestrator | 2025-03-27 01:05:51 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:05:51.433752 | orchestrator | 2025-03-27 01:05:51 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:51.434592 | orchestrator | 2025-03-27 01:05:51 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 
2025-03-27 01:05:51.435683 | orchestrator | 2025-03-27 01:05:51 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:05:51.436791 | orchestrator | 2025-03-27 01:05:51 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:05:51.439177 | orchestrator | 2025-03-27 01:05:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:54.489948 | orchestrator | 2025-03-27 01:05:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:54.490119 | orchestrator | 2025-03-27 01:05:54 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:05:54.495778 | orchestrator | 2025-03-27 01:05:54 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:54.499057 | orchestrator | 2025-03-27 01:05:54 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:05:54.502196 | orchestrator | 2025-03-27 01:05:54 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:05:54.505100 | orchestrator | 2025-03-27 01:05:54 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:05:54.506920 | orchestrator | 2025-03-27 01:05:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:05:57.562576 | orchestrator | 2025-03-27 01:05:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:05:57.562719 | orchestrator | 2025-03-27 01:05:57 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:05:57.563538 | orchestrator | 2025-03-27 01:05:57 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:05:57.564706 | orchestrator | 2025-03-27 01:05:57 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:05:57.565715 | orchestrator | 2025-03-27 01:05:57 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:05:57.567919 | orchestrator | 2025-03-27 01:05:57 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:05:57.568990 | orchestrator | 2025-03-27 01:05:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:00.614167 | orchestrator | 2025-03-27 01:05:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:00.614301 | orchestrator | 2025-03-27 01:06:00 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:00.615758 | orchestrator | 2025-03-27 01:06:00 | INFO  | Task cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state STARTED 2025-03-27 01:06:00.618129 | orchestrator | 2025-03-27 01:06:00 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:06:00.620232 | orchestrator | 2025-03-27 01:06:00 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:00.621549 | orchestrator | 2025-03-27 01:06:00 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:00.624902 | orchestrator | 2025-03-27 01:06:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:00.625397 | orchestrator | 2025-03-27 01:06:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:03.672339 | orchestrator | 2025-03-27 01:06:03 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:03.675241 | orchestrator | 2025-03-27 01:06:03 | INFO  | Task 
cb818f0c-f960-4af6-b0a6-05ed7374daa4 is in state SUCCESS 2025-03-27 01:06:03.678057 | orchestrator | 2025-03-27 01:06:03 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:06:03.678755 | orchestrator | 2025-03-27 01:06:03 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:03.679903 | orchestrator | 2025-03-27 01:06:03 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:03.680990 | orchestrator | 2025-03-27 01:06:03 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:03.684940 | orchestrator | 2025-03-27 01:06:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:06.746597 | orchestrator | 2025-03-27 01:06:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:06.746711 | orchestrator | 2025-03-27 01:06:06 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:06.751269 | orchestrator | 2025-03-27 01:06:06 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:06:06.753714 | orchestrator | 2025-03-27 01:06:06 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:06.753736 | orchestrator | 2025-03-27 01:06:06 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:06.753753 | orchestrator | 2025-03-27 01:06:06 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:06.756553 | orchestrator | 2025-03-27 01:06:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:09.801137 | orchestrator | 2025-03-27 01:06:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:09.801266 | orchestrator | 2025-03-27 01:06:09 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:09.805888 | orchestrator | 2025-03-27 01:06:09 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state STARTED 2025-03-27 01:06:09.808637 | orchestrator | 2025-03-27 01:06:09 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:09.811404 | orchestrator | 2025-03-27 01:06:09 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:09.814201 | orchestrator | 2025-03-27 01:06:09 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:09.816723 | orchestrator | 2025-03-27 01:06:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:09.817165 | orchestrator | 2025-03-27 01:06:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:12.877091 | orchestrator | 2025-03-27 01:06:12 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:12.878779 | orchestrator | 2025-03-27 01:06:12 | INFO  | Task 7ce353d4-a45e-42f6-a6f4-47a1834fdc4f is in state SUCCESS 2025-03-27 01:06:12.881628 | orchestrator | 2025-03-27 01:06:12 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:12.884144 | orchestrator | 2025-03-27 01:06:12 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:12.886614 | orchestrator | 2025-03-27 01:06:12 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:12.888849 | orchestrator | 2025-03-27 01:06:12 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:12.890327 | 
orchestrator | 2025-03-27 01:06:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:15.942285 | orchestrator | 2025-03-27 01:06:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:15.942412 | orchestrator | 2025-03-27 01:06:15 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:15.943545 | orchestrator | 2025-03-27 01:06:15 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:15.945157 | orchestrator | 2025-03-27 01:06:15 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:15.946010 | orchestrator | 2025-03-27 01:06:15 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:15.948364 | orchestrator | 2025-03-27 01:06:15 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:15.950754 | orchestrator | 2025-03-27 01:06:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:18.990784 | orchestrator | 2025-03-27 01:06:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:18.990909 | orchestrator | 2025-03-27 01:06:18 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:18.991501 | orchestrator | 2025-03-27 01:06:18 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:18.991535 | orchestrator | 2025-03-27 01:06:18 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:18.992172 | orchestrator | 2025-03-27 01:06:18 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:18.992987 | orchestrator | 2025-03-27 01:06:18 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:18.993810 | orchestrator | 2025-03-27 01:06:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:18.994097 | orchestrator | 2025-03-27 01:06:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:22.030853 | orchestrator | 2025-03-27 01:06:22 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:22.031679 | orchestrator | 2025-03-27 01:06:22 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:22.031726 | orchestrator | 2025-03-27 01:06:22 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:22.032419 | orchestrator | 2025-03-27 01:06:22 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:22.034347 | orchestrator | 2025-03-27 01:06:22 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:22.034928 | orchestrator | 2025-03-27 01:06:22 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:25.071875 | orchestrator | 2025-03-27 01:06:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:25.072003 | orchestrator | 2025-03-27 01:06:25 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:25.072215 | orchestrator | 2025-03-27 01:06:25 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:25.073030 | orchestrator | 2025-03-27 01:06:25 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:25.074370 | orchestrator | 2025-03-27 01:06:25 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 
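
The blocks of STARTED/Wait messages above come from the deployment CLI polling its queued background tasks until each one reaches a terminal state, sleeping one second between rounds. A minimal sketch of that polling pattern is shown below; get_task_state() is a hypothetical callback standing in for however the real tool queries a task's state, so this illustrates the loop only and is not the actual OSISM implementation.

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll until every task reports a terminal state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)   # e.g. STARTED, SUCCESS, FAILURE
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

As the log shows, tasks drop out of the list one by one once they report SUCCESS, while the remaining ones keep being listed as STARTED on every round.
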
2025-03-27 01:06:25.076867 | orchestrator | 2025-03-27 01:06:25 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:28.122171 | orchestrator | 2025-03-27 01:06:25 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:28.122265 | orchestrator | 2025-03-27 01:06:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:28.122292 | orchestrator | 2025-03-27 01:06:28 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:28.124360 | orchestrator | 2025-03-27 01:06:28 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:28.125027 | orchestrator | 2025-03-27 01:06:28 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:28.125046 | orchestrator | 2025-03-27 01:06:28 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:28.125921 | orchestrator | 2025-03-27 01:06:28 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:28.126741 | orchestrator | 2025-03-27 01:06:28 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:31.170150 | orchestrator | 2025-03-27 01:06:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:31.170289 | orchestrator | 2025-03-27 01:06:31 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:31.170730 | orchestrator | 2025-03-27 01:06:31 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:31.171532 | orchestrator | 2025-03-27 01:06:31 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:31.172220 | orchestrator | 2025-03-27 01:06:31 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:31.173024 | orchestrator | 2025-03-27 01:06:31 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:31.173717 | orchestrator | 2025-03-27 01:06:31 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:34.217315 | orchestrator | 2025-03-27 01:06:31 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:34.217445 | orchestrator | 2025-03-27 01:06:34 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:34.218080 | orchestrator | 2025-03-27 01:06:34 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:34.218119 | orchestrator | 2025-03-27 01:06:34 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:34.220236 | orchestrator | 2025-03-27 01:06:34 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:34.222712 | orchestrator | 2025-03-27 01:06:34 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:34.223767 | orchestrator | 2025-03-27 01:06:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:37.257785 | orchestrator | 2025-03-27 01:06:34 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:37.257919 | orchestrator | 2025-03-27 01:06:37 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:37.260160 | orchestrator | 2025-03-27 01:06:37 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:37.260871 | orchestrator | 2025-03-27 01:06:37 | INFO  | Task 
41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:37.261704 | orchestrator | 2025-03-27 01:06:37 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:37.262724 | orchestrator | 2025-03-27 01:06:37 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:37.266217 | orchestrator | 2025-03-27 01:06:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:40.308293 | orchestrator | 2025-03-27 01:06:37 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:40.308438 | orchestrator | 2025-03-27 01:06:40 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:40.309857 | orchestrator | 2025-03-27 01:06:40 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state STARTED 2025-03-27 01:06:40.312113 | orchestrator | 2025-03-27 01:06:40 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:40.313323 | orchestrator | 2025-03-27 01:06:40 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:40.314813 | orchestrator | 2025-03-27 01:06:40 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:40.318304 | orchestrator | 2025-03-27 01:06:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:40.320208 | orchestrator | 2025-03-27 01:06:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:43.366286 | orchestrator | 2025-03-27 01:06:43 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:43.367378 | orchestrator | 2025-03-27 01:06:43 | INFO  | Task 728b86d7-84c8-4a5d-8fc1-5608a21eded4 is in state SUCCESS 2025-03-27 01:06:43.367818 | orchestrator | 2025-03-27 01:06:43.367854 | orchestrator | None 2025-03-27 01:06:43.367869 | orchestrator | 2025-03-27 01:06:43.367884 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-03-27 01:06:43.367899 | orchestrator | 2025-03-27 01:06:43.367913 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-03-27 01:06:43.367927 | orchestrator | Thursday 27 March 2025 01:05:03 +0000 (0:00:00.190) 0:00:00.190 ******** 2025-03-27 01:06:43.367941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-03-27 01:06:43.367966 | orchestrator | 2025-03-27 01:06:43.367981 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-03-27 01:06:43.368018 | orchestrator | Thursday 27 March 2025 01:05:03 +0000 (0:00:00.231) 0:00:00.422 ******** 2025-03-27 01:06:43.368033 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-03-27 01:06:43.368047 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-03-27 01:06:43.368062 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-03-27 01:06:43.368077 | orchestrator | 2025-03-27 01:06:43.368091 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-03-27 01:06:43.368105 | orchestrator | Thursday 27 March 2025 01:05:05 +0000 (0:00:01.292) 0:00:01.715 ******** 2025-03-27 01:06:43.368119 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-03-27 
01:06:43.368133 | orchestrator | 2025-03-27 01:06:43.368147 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-03-27 01:06:43.368160 | orchestrator | Thursday 27 March 2025 01:05:06 +0000 (0:00:01.235) 0:00:02.951 ******** 2025-03-27 01:06:43.368174 | orchestrator | changed: [testbed-manager] 2025-03-27 01:06:43.368194 | orchestrator | 2025-03-27 01:06:43.368209 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-03-27 01:06:43.368222 | orchestrator | Thursday 27 March 2025 01:05:07 +0000 (0:00:00.988) 0:00:03.940 ******** 2025-03-27 01:06:43.368236 | orchestrator | changed: [testbed-manager] 2025-03-27 01:06:43.368250 | orchestrator | 2025-03-27 01:06:43.368263 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-03-27 01:06:43.368277 | orchestrator | Thursday 27 March 2025 01:05:08 +0000 (0:00:01.106) 0:00:05.047 ******** 2025-03-27 01:06:43.368291 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-03-27 01:06:43.368304 | orchestrator | ok: [testbed-manager] 2025-03-27 01:06:43.368318 | orchestrator | 2025-03-27 01:06:43.368332 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-03-27 01:06:43.368346 | orchestrator | Thursday 27 March 2025 01:05:49 +0000 (0:00:40.891) 0:00:45.938 ******** 2025-03-27 01:06:43.368360 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-03-27 01:06:43.368374 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-03-27 01:06:43.368522 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-03-27 01:06:43.368541 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-03-27 01:06:43.368557 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-03-27 01:06:43.368572 | orchestrator | 2025-03-27 01:06:43.368588 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-03-27 01:06:43.368603 | orchestrator | Thursday 27 March 2025 01:05:53 +0000 (0:00:04.455) 0:00:50.394 ******** 2025-03-27 01:06:43.368620 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-03-27 01:06:43.368636 | orchestrator | 2025-03-27 01:06:43.368651 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-03-27 01:06:43.368667 | orchestrator | Thursday 27 March 2025 01:05:54 +0000 (0:00:00.543) 0:00:50.938 ******** 2025-03-27 01:06:43.368683 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:06:43.368705 | orchestrator | 2025-03-27 01:06:43.368721 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-03-27 01:06:43.368738 | orchestrator | Thursday 27 March 2025 01:05:54 +0000 (0:00:00.150) 0:00:51.088 ******** 2025-03-27 01:06:43.368753 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:06:43.368770 | orchestrator | 2025-03-27 01:06:43.368785 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-03-27 01:06:43.368802 | orchestrator | Thursday 27 March 2025 01:05:54 +0000 (0:00:00.310) 0:00:51.399 ******** 2025-03-27 01:06:43.368818 | orchestrator | changed: [testbed-manager] 2025-03-27 01:06:43.368832 | orchestrator | 2025-03-27 01:06:43.368845 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-03-27 
01:06:43.368859 | orchestrator | Thursday 27 March 2025 01:05:56 +0000 (0:00:01.959) 0:00:53.359 ******** 2025-03-27 01:06:43.368873 | orchestrator | changed: [testbed-manager] 2025-03-27 01:06:43.368896 | orchestrator | 2025-03-27 01:06:43.368910 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-03-27 01:06:43.368924 | orchestrator | Thursday 27 March 2025 01:05:58 +0000 (0:00:01.125) 0:00:54.485 ******** 2025-03-27 01:06:43.368938 | orchestrator | changed: [testbed-manager] 2025-03-27 01:06:43.368957 | orchestrator | 2025-03-27 01:06:43.368970 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-03-27 01:06:43.368984 | orchestrator | Thursday 27 March 2025 01:05:58 +0000 (0:00:00.667) 0:00:55.153 ******** 2025-03-27 01:06:43.368998 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-03-27 01:06:43.369012 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-03-27 01:06:43.369026 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-03-27 01:06:43.369040 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-03-27 01:06:43.369053 | orchestrator | 2025-03-27 01:06:43.369067 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:06:43.369081 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-27 01:06:43.369096 | orchestrator | 2025-03-27 01:06:43.369120 | orchestrator | Thursday 27 March 2025 01:06:00 +0000 (0:00:01.641) 0:00:56.794 ******** 2025-03-27 01:06:43.369134 | orchestrator | =============================================================================== 2025-03-27 01:06:43.369148 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.89s 2025-03-27 01:06:43.369162 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.46s 2025-03-27 01:06:43.369175 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.96s 2025-03-27 01:06:43.369189 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.64s 2025-03-27 01:06:43.369203 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.29s 2025-03-27 01:06:43.369216 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.24s 2025-03-27 01:06:43.369230 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.13s 2025-03-27 01:06:43.369243 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.11s 2025-03-27 01:06:43.369257 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.99s 2025-03-27 01:06:43.369270 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.67s 2025-03-27 01:06:43.369284 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.54s 2025-03-27 01:06:43.369298 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2025-03-27 01:06:43.369311 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-03-27 01:06:43.369325 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2025-03-27 01:06:43.369338 | orchestrator | 2025-03-27 01:06:43.369352 | 
orchestrator | 2025-03-27 01:06:43.369366 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-03-27 01:06:43.369379 | orchestrator | 2025-03-27 01:06:43.369393 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-03-27 01:06:43.369407 | orchestrator | Thursday 27 March 2025 01:05:31 +0000 (0:00:00.219) 0:00:00.219 ******** 2025-03-27 01:06:43.369421 | orchestrator | changed: [localhost] 2025-03-27 01:06:43.369435 | orchestrator | 2025-03-27 01:06:43.369448 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-03-27 01:06:43.369462 | orchestrator | Thursday 27 March 2025 01:05:31 +0000 (0:00:00.704) 0:00:00.924 ******** 2025-03-27 01:06:43.369614 | orchestrator | changed: [localhost] 2025-03-27 01:06:43.369636 | orchestrator | 2025-03-27 01:06:43.369650 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-03-27 01:06:43.369664 | orchestrator | Thursday 27 March 2025 01:06:04 +0000 (0:00:32.659) 0:00:33.583 ******** 2025-03-27 01:06:43.369688 | orchestrator | changed: [localhost] 2025-03-27 01:06:43.369702 | orchestrator | 2025-03-27 01:06:43.369716 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:06:43.369729 | orchestrator | 2025-03-27 01:06:43.369743 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:06:43.369757 | orchestrator | Thursday 27 March 2025 01:06:08 +0000 (0:00:04.293) 0:00:37.877 ******** 2025-03-27 01:06:43.369770 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:06:43.369784 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:06:43.369798 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:06:43.369812 | orchestrator | 2025-03-27 01:06:43.369825 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:06:43.369845 | orchestrator | Thursday 27 March 2025 01:06:09 +0000 (0:00:00.481) 0:00:38.359 ******** 2025-03-27 01:06:43.369859 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-03-27 01:06:43.369872 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-03-27 01:06:43.369886 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-03-27 01:06:43.369900 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-03-27 01:06:43.369914 | orchestrator | 2025-03-27 01:06:43.369928 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-03-27 01:06:43.369941 | orchestrator | skipping: no hosts matched 2025-03-27 01:06:43.369955 | orchestrator | 2025-03-27 01:06:43.369969 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:06:43.369983 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:06:43.369998 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:06:43.370012 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:06:43.370074 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:06:43.370088 | orchestrator | 2025-03-27 01:06:43.370102 | orchestrator | 
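
The ironic-agent image play above boils down to creating a destination directory and fetching the initramfs and kernel images, which in Ansible is typically a file task plus two get_url tasks. The sketch below mirrors those steps in plain Python; the base URL and destination path are placeholders, since the values actually used by the testbed playbook are not visible in this log.

    import pathlib
    import urllib.request

    # Placeholder values -- the real source URL and destination directory are not shown in the log.
    BASE_URL = "https://example.org/ironic-python-agent"
    DEST_DIR = pathlib.Path("/opt/ironic-agent-images")

    def download_ipa_images():
        # "Ensure the destination directory exists"
        DEST_DIR.mkdir(parents=True, exist_ok=True)
        # "Download ironic-agent initramfs" and "Download ironic-agent kernel"
        for name in ("ironic-agent.initramfs", "ironic-agent.kernel"):
            urllib.request.urlretrieve(f"{BASE_URL}/{name}", str(DEST_DIR / name))

The grouping plays that follow only sort hosts into enable_ironic_True/False groups; because every node reports enable_ironic_False here, the "Apply role ironic" play finds no matching hosts and is skipped.
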
2025-03-27 01:06:43.370116 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:06:43.370129 | orchestrator | Thursday 27 March 2025 01:06:09 +0000 (0:00:00.516) 0:00:38.876 ******** 2025-03-27 01:06:43.370143 | orchestrator | =============================================================================== 2025-03-27 01:06:43.370157 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 32.66s 2025-03-27 01:06:43.370171 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.29s 2025-03-27 01:06:43.370184 | orchestrator | Ensure the destination directory exists --------------------------------- 0.70s 2025-03-27 01:06:43.370198 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2025-03-27 01:06:43.370211 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s 2025-03-27 01:06:43.370225 | orchestrator | 2025-03-27 01:06:43.370250 | orchestrator | 2025-03-27 01:06:43 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:43.370727 | orchestrator | 2025-03-27 01:06:43 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:43.370755 | orchestrator | 2025-03-27 01:06:43 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:43.371936 | orchestrator | 2025-03-27 01:06:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:46.403906 | orchestrator | 2025-03-27 01:06:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:46.404027 | orchestrator | 2025-03-27 01:06:46 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:46.404269 | orchestrator | 2025-03-27 01:06:46 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:46.406161 | orchestrator | 2025-03-27 01:06:46 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:46.406687 | orchestrator | 2025-03-27 01:06:46 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:46.407736 | orchestrator | 2025-03-27 01:06:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:49.443409 | orchestrator | 2025-03-27 01:06:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:49.443733 | orchestrator | 2025-03-27 01:06:49 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:49.444394 | orchestrator | 2025-03-27 01:06:49 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:49.444432 | orchestrator | 2025-03-27 01:06:49 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:49.444891 | orchestrator | 2025-03-27 01:06:49 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:49.445942 | orchestrator | 2025-03-27 01:06:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:52.494611 | orchestrator | 2025-03-27 01:06:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:52.494697 | orchestrator | 2025-03-27 01:06:52 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:52.498586 | orchestrator | 2025-03-27 01:06:52 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 
01:06:52.505717 | orchestrator | 2025-03-27 01:06:52 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:52.508148 | orchestrator | 2025-03-27 01:06:52 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:52.508797 | orchestrator | 2025-03-27 01:06:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:55.550691 | orchestrator | 2025-03-27 01:06:52 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:55.550831 | orchestrator | 2025-03-27 01:06:55 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:55.550997 | orchestrator | 2025-03-27 01:06:55 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:55.553830 | orchestrator | 2025-03-27 01:06:55 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:55.555903 | orchestrator | 2025-03-27 01:06:55 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:55.557030 | orchestrator | 2025-03-27 01:06:55 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:58.617962 | orchestrator | 2025-03-27 01:06:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:06:58.618139 | orchestrator | 2025-03-27 01:06:58 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:06:58.619137 | orchestrator | 2025-03-27 01:06:58 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:06:58.619169 | orchestrator | 2025-03-27 01:06:58 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:06:58.620916 | orchestrator | 2025-03-27 01:06:58 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:06:58.621718 | orchestrator | 2025-03-27 01:06:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:06:58.622087 | orchestrator | 2025-03-27 01:06:58 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:01.673946 | orchestrator | 2025-03-27 01:07:01 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:01.674787 | orchestrator | 2025-03-27 01:07:01 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:01.674844 | orchestrator | 2025-03-27 01:07:01 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:01.675309 | orchestrator | 2025-03-27 01:07:01 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:01.676337 | orchestrator | 2025-03-27 01:07:01 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:04.713193 | orchestrator | 2025-03-27 01:07:01 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:04.713329 | orchestrator | 2025-03-27 01:07:04 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:04.713680 | orchestrator | 2025-03-27 01:07:04 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:04.713715 | orchestrator | 2025-03-27 01:07:04 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:04.714432 | orchestrator | 2025-03-27 01:07:04 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:04.715114 | orchestrator | 2025-03-27 01:07:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in 
state STARTED 2025-03-27 01:07:07.751120 | orchestrator | 2025-03-27 01:07:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:07.751243 | orchestrator | 2025-03-27 01:07:07 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:07.751792 | orchestrator | 2025-03-27 01:07:07 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:07.752350 | orchestrator | 2025-03-27 01:07:07 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:07.753625 | orchestrator | 2025-03-27 01:07:07 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:07.754195 | orchestrator | 2025-03-27 01:07:07 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:07.755472 | orchestrator | 2025-03-27 01:07:07 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:10.785006 | orchestrator | 2025-03-27 01:07:10 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:10.785757 | orchestrator | 2025-03-27 01:07:10 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:10.786594 | orchestrator | 2025-03-27 01:07:10 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:10.787776 | orchestrator | 2025-03-27 01:07:10 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:10.790647 | orchestrator | 2025-03-27 01:07:10 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:13.828730 | orchestrator | 2025-03-27 01:07:10 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:13.829018 | orchestrator | 2025-03-27 01:07:13 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:13.829924 | orchestrator | 2025-03-27 01:07:13 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:13.829961 | orchestrator | 2025-03-27 01:07:13 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:13.833363 | orchestrator | 2025-03-27 01:07:13 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:16.871799 | orchestrator | 2025-03-27 01:07:13 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:16.871926 | orchestrator | 2025-03-27 01:07:13 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:16.871963 | orchestrator | 2025-03-27 01:07:16 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:16.872930 | orchestrator | 2025-03-27 01:07:16 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:16.872963 | orchestrator | 2025-03-27 01:07:16 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:16.872984 | orchestrator | 2025-03-27 01:07:16 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:16.873932 | orchestrator | 2025-03-27 01:07:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:19.904544 | orchestrator | 2025-03-27 01:07:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:19.904685 | orchestrator | 2025-03-27 01:07:19 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:19.904821 | orchestrator | 2025-03-27 01:07:19 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in 
state STARTED 2025-03-27 01:07:19.909481 | orchestrator | 2025-03-27 01:07:19 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:19.911103 | orchestrator | 2025-03-27 01:07:19 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:19.911909 | orchestrator | 2025-03-27 01:07:19 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:22.959090 | orchestrator | 2025-03-27 01:07:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:22.959207 | orchestrator | 2025-03-27 01:07:22 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:22.960559 | orchestrator | 2025-03-27 01:07:22 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:22.962527 | orchestrator | 2025-03-27 01:07:22 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:22.964351 | orchestrator | 2025-03-27 01:07:22 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:22.965964 | orchestrator | 2025-03-27 01:07:22 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:25.995884 | orchestrator | 2025-03-27 01:07:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:25.996016 | orchestrator | 2025-03-27 01:07:25 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:25.997198 | orchestrator | 2025-03-27 01:07:25 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:25.998131 | orchestrator | 2025-03-27 01:07:25 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:25.999208 | orchestrator | 2025-03-27 01:07:25 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:26.000258 | orchestrator | 2025-03-27 01:07:25 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:26.001222 | orchestrator | 2025-03-27 01:07:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:29.036646 | orchestrator | 2025-03-27 01:07:29 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:29.038645 | orchestrator | 2025-03-27 01:07:29 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:29.038723 | orchestrator | 2025-03-27 01:07:29 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:29.039195 | orchestrator | 2025-03-27 01:07:29 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:29.044470 | orchestrator | 2025-03-27 01:07:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:29.047064 | orchestrator | 2025-03-27 01:07:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:32.087070 | orchestrator | 2025-03-27 01:07:32 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:32.088774 | orchestrator | 2025-03-27 01:07:32 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:32.090850 | orchestrator | 2025-03-27 01:07:32 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:32.092930 | orchestrator | 2025-03-27 01:07:32 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:32.095444 | orchestrator | 2025-03-27 01:07:32 | INFO  | Task 
06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:35.137390 | orchestrator | 2025-03-27 01:07:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:35.137578 | orchestrator | 2025-03-27 01:07:35 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:35.140833 | orchestrator | 2025-03-27 01:07:35 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:35.142644 | orchestrator | 2025-03-27 01:07:35 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:35.144745 | orchestrator | 2025-03-27 01:07:35 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:35.147632 | orchestrator | 2025-03-27 01:07:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:38.184362 | orchestrator | 2025-03-27 01:07:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:38.184490 | orchestrator | 2025-03-27 01:07:38 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:38.185175 | orchestrator | 2025-03-27 01:07:38 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:38.185216 | orchestrator | 2025-03-27 01:07:38 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:38.186207 | orchestrator | 2025-03-27 01:07:38 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:38.187121 | orchestrator | 2025-03-27 01:07:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:41.228003 | orchestrator | 2025-03-27 01:07:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:41.228146 | orchestrator | 2025-03-27 01:07:41 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:41.228693 | orchestrator | 2025-03-27 01:07:41 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:41.229271 | orchestrator | 2025-03-27 01:07:41 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:41.229755 | orchestrator | 2025-03-27 01:07:41 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:41.231797 | orchestrator | 2025-03-27 01:07:41 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:44.288192 | orchestrator | 2025-03-27 01:07:41 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:44.288463 | orchestrator | 2025-03-27 01:07:44 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:44.290921 | orchestrator | 2025-03-27 01:07:44 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:44.290965 | orchestrator | 2025-03-27 01:07:44 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:44.292339 | orchestrator | 2025-03-27 01:07:44 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:44.293710 | orchestrator | 2025-03-27 01:07:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:47.355423 | orchestrator | 2025-03-27 01:07:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:47.355626 | orchestrator | 2025-03-27 01:07:47 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:47.359639 | orchestrator | 2025-03-27 01:07:47 | INFO  | Task 
41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state STARTED 2025-03-27 01:07:47.360342 | orchestrator | 2025-03-27 01:07:47 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:47.361425 | orchestrator | 2025-03-27 01:07:47 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:47.365440 | orchestrator | 2025-03-27 01:07:47 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:50.404873 | orchestrator | 2025-03-27 01:07:47 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:50.404997 | orchestrator | 2025-03-27 01:07:50 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:50.408396 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-27 01:07:50.408437 | orchestrator | 2025-03-27 01:07:50.408453 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-03-27 01:07:50.408469 | orchestrator | 2025-03-27 01:07:50.408484 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-03-27 01:07:50.408551 | orchestrator | Thursday 27 March 2025 01:06:04 +0000 (0:00:00.501) 0:00:00.501 ******** 2025-03-27 01:07:50.408566 | orchestrator | changed: [testbed-manager] 2025-03-27 01:07:50.408599 | orchestrator | 2025-03-27 01:07:50.408614 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-03-27 01:07:50.408628 | orchestrator | Thursday 27 March 2025 01:06:06 +0000 (0:00:01.461) 0:00:01.963 ******** 2025-03-27 01:07:50.408642 | orchestrator | changed: [testbed-manager] 2025-03-27 01:07:50.408656 | orchestrator | 2025-03-27 01:07:50.408670 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-03-27 01:07:50.408684 | orchestrator | Thursday 27 March 2025 01:06:07 +0000 (0:00:01.258) 0:00:03.222 ******** 2025-03-27 01:07:50.408697 | orchestrator | changed: [testbed-manager] 2025-03-27 01:07:50.408711 | orchestrator | 2025-03-27 01:07:50.408725 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-03-27 01:07:50.408739 | orchestrator | Thursday 27 March 2025 01:06:08 +0000 (0:00:01.122) 0:00:04.344 ******** 2025-03-27 01:07:50.408752 | orchestrator | changed: [testbed-manager] 2025-03-27 01:07:50.408766 | orchestrator | 2025-03-27 01:07:50.408780 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-03-27 01:07:50.408794 | orchestrator | Thursday 27 March 2025 01:06:09 +0000 (0:00:01.130) 0:00:05.475 ******** 2025-03-27 01:07:50.408807 | orchestrator | changed: [testbed-manager] 2025-03-27 01:07:50.408821 | orchestrator | 2025-03-27 01:07:50.408835 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-03-27 01:07:50.408848 | orchestrator | Thursday 27 March 2025 01:06:10 +0000 (0:00:01.029) 0:00:06.505 ******** 2025-03-27 01:07:50.408861 | orchestrator | changed: [testbed-manager] 2025-03-27 01:07:50.408875 | orchestrator | 2025-03-27 01:07:50.408914 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-03-27 01:07:50.408929 | orchestrator | Thursday 27 March 2025 01:06:11 +0000 (0:00:01.125) 0:00:07.630 ******** 2025-03-27 01:07:50.408942 | orchestrator | changed: [testbed-manager] 2025-03-27 01:07:50.408956 | orchestrator | 
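
The dashboard bootstrap play above is essentially a series of ceph mgr settings applied from testbed-manager: the dashboard module is disabled, the mgr/dashboard options are set, and the module is enabled again. Expressed as plain ceph CLI calls this would look roughly like the sketch below; treat it as an assumed equivalent, since the play's exact invocations are not part of the log.

    import subprocess

    def ceph(*args):
        # thin wrapper so each setting below is a single "ceph ..." call
        subprocess.run(["ceph", *args], check=True)

    def bootstrap_dashboard():
        ceph("mgr", "module", "disable", "dashboard")                         # Disable the ceph dashboard
        ceph("config", "set", "mgr", "mgr/dashboard/ssl", "false")
        ceph("config", "set", "mgr", "mgr/dashboard/server_port", "7000")
        ceph("config", "set", "mgr", "mgr/dashboard/server_addr", "0.0.0.0")
        ceph("config", "set", "mgr", "mgr/dashboard/standby_behaviour", "error")
        ceph("config", "set", "mgr", "mgr/dashboard/standby_error_status_code", "404")
        ceph("mgr", "module", "enable", "dashboard")                          # Enable the ceph dashboard
        # The play then writes the dashboard password to a temporary file,
        # creates the admin user from it, and restarts the ceph manager services.
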
2025-03-27 01:07:50.408969 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-03-27 01:07:50.408985 | orchestrator | Thursday 27 March 2025 01:06:13 +0000 (0:00:02.285) 0:00:09.915 ******** 2025-03-27 01:07:50.409001 | orchestrator | changed: [testbed-manager] 2025-03-27 01:07:50.409016 | orchestrator | 2025-03-27 01:07:50.409032 | orchestrator | TASK [Create admin user] ******************************************************* 2025-03-27 01:07:50.409047 | orchestrator | Thursday 27 March 2025 01:06:15 +0000 (0:00:01.187) 0:00:11.103 ******** 2025-03-27 01:07:50.409062 | orchestrator | changed: [testbed-manager] 2025-03-27 01:07:50.409077 | orchestrator | 2025-03-27 01:07:50.409098 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-03-27 01:07:50.409114 | orchestrator | Thursday 27 March 2025 01:06:33 +0000 (0:00:18.327) 0:00:29.430 ******** 2025-03-27 01:07:50.409130 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:07:50.409146 | orchestrator | 2025-03-27 01:07:50.409162 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-03-27 01:07:50.409177 | orchestrator | 2025-03-27 01:07:50.409193 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-03-27 01:07:50.409209 | orchestrator | Thursday 27 March 2025 01:06:34 +0000 (0:00:00.769) 0:00:30.200 ******** 2025-03-27 01:07:50.409225 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:07:50.409241 | orchestrator | 2025-03-27 01:07:50.409256 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-03-27 01:07:50.409271 | orchestrator | 2025-03-27 01:07:50.409287 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-03-27 01:07:50.409303 | orchestrator | Thursday 27 March 2025 01:06:36 +0000 (0:00:02.462) 0:00:32.662 ******** 2025-03-27 01:07:50.409318 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:07:50.409334 | orchestrator | 2025-03-27 01:07:50.409347 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-03-27 01:07:50.409361 | orchestrator | 2025-03-27 01:07:50.409375 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-03-27 01:07:50.409388 | orchestrator | Thursday 27 March 2025 01:06:38 +0000 (0:00:02.014) 0:00:34.677 ******** 2025-03-27 01:07:50.409417 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:07:50.409431 | orchestrator | 2025-03-27 01:07:50.409458 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:07:50.409473 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-27 01:07:50.409488 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:07:50.409520 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:07:50.409535 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:07:50.409643 | orchestrator | 2025-03-27 01:07:50.409659 | orchestrator | 2025-03-27 01:07:50.409673 | orchestrator | 2025-03-27 01:07:50.409687 | orchestrator | TASKS RECAP ******************************************************************** 
2025-03-27 01:07:50.409701 | orchestrator | Thursday 27 March 2025 01:06:40 +0000 (0:00:01.583) 0:00:36.260 ******** 2025-03-27 01:07:50.409714 | orchestrator | =============================================================================== 2025-03-27 01:07:50.409728 | orchestrator | Create admin user ------------------------------------------------------ 18.33s 2025-03-27 01:07:50.409754 | orchestrator | Restart ceph manager service -------------------------------------------- 6.06s 2025-03-27 01:07:50.409780 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.29s 2025-03-27 01:07:50.409794 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.46s 2025-03-27 01:07:50.409808 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.26s 2025-03-27 01:07:50.409822 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s 2025-03-27 01:07:50.409836 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.13s 2025-03-27 01:07:50.409850 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.13s 2025-03-27 01:07:50.409864 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.12s 2025-03-27 01:07:50.409877 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.03s 2025-03-27 01:07:50.409891 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.77s 2025-03-27 01:07:50.409905 | orchestrator | 2025-03-27 01:07:50.409918 | orchestrator | 2025-03-27 01:07:50.409932 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:07:50.409946 | orchestrator | 2025-03-27 01:07:50.409960 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:07:50.409973 | orchestrator | Thursday 27 March 2025 01:06:14 +0000 (0:00:00.894) 0:00:00.894 ******** 2025-03-27 01:07:50.409987 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:07:50.410001 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:07:50.410063 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:07:50.410081 | orchestrator | 2025-03-27 01:07:50.410095 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:07:50.410109 | orchestrator | Thursday 27 March 2025 01:06:16 +0000 (0:00:01.541) 0:00:02.436 ******** 2025-03-27 01:07:50.410123 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-03-27 01:07:50.410137 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-03-27 01:07:50.410232 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-03-27 01:07:50.410254 | orchestrator | 2025-03-27 01:07:50.410268 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-03-27 01:07:50.410282 | orchestrator | 2025-03-27 01:07:50.410303 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-03-27 01:07:50.410317 | orchestrator | Thursday 27 March 2025 01:06:16 +0000 (0:00:00.739) 0:00:03.176 ******** 2025-03-27 01:07:50.410331 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:07:50.410345 | orchestrator | 2025-03-27 01:07:50.410359 | 
orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-03-27 01:07:50.410373 | orchestrator | Thursday 27 March 2025 01:06:18 +0000 (0:00:01.670) 0:00:04.846 ******** 2025-03-27 01:07:50.410386 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-03-27 01:07:50.410400 | orchestrator | 2025-03-27 01:07:50.410414 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-03-27 01:07:50.410427 | orchestrator | Thursday 27 March 2025 01:06:22 +0000 (0:00:03.908) 0:00:08.755 ******** 2025-03-27 01:07:50.410441 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-03-27 01:07:50.410455 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-03-27 01:07:50.410469 | orchestrator | 2025-03-27 01:07:50.410482 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-03-27 01:07:50.410516 | orchestrator | Thursday 27 March 2025 01:06:30 +0000 (0:00:07.888) 0:00:16.643 ******** 2025-03-27 01:07:50.410532 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-27 01:07:50.410545 | orchestrator | 2025-03-27 01:07:50.410559 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-03-27 01:07:50.410573 | orchestrator | Thursday 27 March 2025 01:06:34 +0000 (0:00:04.304) 0:00:20.948 ******** 2025-03-27 01:07:50.410598 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-27 01:07:50.410612 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-03-27 01:07:50.410625 | orchestrator | 2025-03-27 01:07:50.410639 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-03-27 01:07:50.410653 | orchestrator | Thursday 27 March 2025 01:06:39 +0000 (0:00:04.833) 0:00:25.781 ******** 2025-03-27 01:07:50.410667 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-27 01:07:50.410680 | orchestrator | 2025-03-27 01:07:50.410694 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-03-27 01:07:50.410708 | orchestrator | Thursday 27 March 2025 01:06:42 +0000 (0:00:03.516) 0:00:29.298 ******** 2025-03-27 01:07:50.410722 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-03-27 01:07:50.410735 | orchestrator | 2025-03-27 01:07:50.410749 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-03-27 01:07:50.410763 | orchestrator | Thursday 27 March 2025 01:06:48 +0000 (0:00:05.309) 0:00:34.607 ******** 2025-03-27 01:07:50.410777 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:07:50.410791 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:07:50.410804 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:07:50.410818 | orchestrator | 2025-03-27 01:07:50.410832 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-03-27 01:07:50.410846 | orchestrator | Thursday 27 March 2025 01:06:49 +0000 (0:00:01.029) 0:00:35.645 ******** 2025-03-27 01:07:50.410875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.410897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.410914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.410940 | orchestrator | 2025-03-27 01:07:50.410956 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-03-27 01:07:50.410972 | orchestrator | Thursday 27 March 2025 01:06:51 +0000 (0:00:02.663) 0:00:38.309 ******** 2025-03-27 01:07:50.410987 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:07:50.411003 | orchestrator | 2025-03-27 01:07:50.411019 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-03-27 01:07:50.411036 | orchestrator | Thursday 27 March 2025 01:06:52 +0000 (0:00:00.447) 0:00:38.757 ******** 2025-03-27 01:07:50.411052 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:07:50.411068 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:07:50.411083 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:07:50.411101 | orchestrator | 2025-03-27 01:07:50.411190 | orchestrator | TASK [placement : 
include_tasks] *********************************************** 2025-03-27 01:07:50.411208 | orchestrator | Thursday 27 March 2025 01:06:53 +0000 (0:00:01.205) 0:00:39.962 ******** 2025-03-27 01:07:50.411224 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:07:50.411238 | orchestrator | 2025-03-27 01:07:50.411252 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-03-27 01:07:50.411265 | orchestrator | Thursday 27 March 2025 01:06:54 +0000 (0:00:00.855) 0:00:40.818 ******** 2025-03-27 01:07:50.411289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.411305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.411320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.411343 | orchestrator | 2025-03-27 01:07:50.411357 | orchestrator | 
TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-03-27 01:07:50.411371 | orchestrator | Thursday 27 March 2025 01:06:58 +0000 (0:00:04.057) 0:00:44.875 ******** 2025-03-27 01:07:50.411385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-27 01:07:50.411400 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:07:50.411414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-27 01:07:50.411434 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:07:50.411449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-27 01:07:50.411463 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:07:50.411477 | orchestrator | 2025-03-27 01:07:50.411491 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-03-27 01:07:50.411537 | orchestrator | Thursday 27 March 2025 01:07:00 +0000 (0:00:01.948) 0:00:46.824 
******** 2025-03-27 01:07:50.411591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-27 01:07:50.411617 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:07:50.411632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-27 01:07:50.411647 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:07:50.411661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-27 01:07:50.411675 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:07:50.411688 | orchestrator | 2025-03-27 01:07:50.411709 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-03-27 01:07:50.411723 | orchestrator | Thursday 27 March 2025 01:07:02 +0000 (0:00:02.055) 0:00:48.880 ******** 2025-03-27 01:07:50.411737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.411759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.411774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.411788 | orchestrator | 2025-03-27 01:07:50.411813 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-03-27 01:07:50.411827 | orchestrator | Thursday 27 March 2025 01:07:04 +0000 (0:00:02.092) 0:00:50.973 ******** 2025-03-27 01:07:50.411841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.411864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.411886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.411901 | orchestrator | 2025-03-27 01:07:50.411915 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-03-27 01:07:50.411928 | orchestrator | Thursday 27 March 2025 01:07:10 +0000 (0:00:05.490) 0:00:56.463 ******** 2025-03-27 01:07:50.411942 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-03-27 01:07:50.411956 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-03-27 01:07:50.411970 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-03-27 01:07:50.411984 | orchestrator | 2025-03-27 01:07:50.411998 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-03-27 01:07:50.412011 | orchestrator | Thursday 27 March 2025 01:07:12 +0000 (0:00:02.106) 0:00:58.570 ******** 2025-03-27 01:07:50.412025 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:07:50.412039 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:07:50.412052 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:07:50.412066 | orchestrator | 2025-03-27 01:07:50.412079 | orchestrator | TASK [placement 
: Copying over existing policy file] *************************** 2025-03-27 01:07:50.412093 | orchestrator | Thursday 27 March 2025 01:07:13 +0000 (0:00:01.798) 0:01:00.369 ******** 2025-03-27 01:07:50.412113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-27 01:07:50.412127 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:07:50.412150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-27 01:07:50.412172 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:07:50.412192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-27 01:07:50.412206 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:07:50.412220 | orchestrator | 2025-03-27 01:07:50.412234 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-03-27 01:07:50.412248 | orchestrator | Thursday 27 March 2025 01:07:16 +0000 (0:00:02.626) 0:01:02.995 ******** 2025-03-27 
01:07:50.412262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.412281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.412303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-27 01:07:50.412325 | orchestrator | 2025-03-27 01:07:50.412339 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-03-27 01:07:50.412353 | orchestrator | Thursday 27 March 2025 01:07:18 +0000 (0:00:02.358) 0:01:05.354 ******** 2025-03-27 01:07:50.412366 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:07:50.412380 | orchestrator | 2025-03-27 01:07:50.412393 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-03-27 01:07:50.412407 | orchestrator | Thursday 27 March 2025 01:07:21 +0000 (0:00:02.904) 0:01:08.259 ******** 2025-03-27 01:07:50.412420 | orchestrator | changed: [testbed-node-0] 
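For orientation, each item= dict printed by the placement tasks above is one kolla-ansible-style service definition. The Python sketch below only restates the testbed-node-0 entry already shown in the log (the trailing empty volume entry is dropped); the surrounding Jinja templates and loop filters are not visible in this log, so the small loop at the end is an illustration of how such a mapping is typically iterated, not the actual role code.

# Sketch of the placement-api service definition logged above (testbed-node-0).
placement_services = {
    "placement-api": {
        "container_name": "placement_api",
        "group": "placement-api",
        "image": "registry.osism.tech/kolla/release/placement-api:11.0.0.20241206",
        "enabled": True,
        "volumes": [
            "/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
            "timeout": "30",
        },
        "haproxy": {
            "placement_api": {
                "enabled": True, "mode": "http", "external": False,
                "port": "8780", "listen_port": "8780", "tls_backend": "no",
            },
            "placement_api_external": {
                "enabled": True, "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "8780", "listen_port": "8780", "tls_backend": "no",
            },
        },
    },
}

# Illustrative iteration only: the real role loops over this mapping per host
# to render config.json, placement.conf and the HAProxy frontends shown above.
for name, svc in placement_services.items():
    internal_hc = svc["healthcheck"]["test"][-1]
    print(f"{name}: image={svc['image']} healthcheck={internal_hc!r}")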
2025-03-27 01:07:50.412434 | orchestrator | 2025-03-27 01:07:50.412447 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-03-27 01:07:50.412461 | orchestrator | Thursday 27 March 2025 01:07:24 +0000 (0:00:02.733) 0:01:10.992 ******** 2025-03-27 01:07:50.412474 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:07:50.412488 | orchestrator | 2025-03-27 01:07:50.412521 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-03-27 01:07:50.412536 | orchestrator | Thursday 27 March 2025 01:07:38 +0000 (0:00:13.931) 0:01:24.924 ******** 2025-03-27 01:07:50.412549 | orchestrator | 2025-03-27 01:07:50.412563 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-03-27 01:07:50.412576 | orchestrator | Thursday 27 March 2025 01:07:38 +0000 (0:00:00.125) 0:01:25.050 ******** 2025-03-27 01:07:50.412590 | orchestrator | 2025-03-27 01:07:50.412604 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-03-27 01:07:50.412618 | orchestrator | Thursday 27 March 2025 01:07:39 +0000 (0:00:00.435) 0:01:25.485 ******** 2025-03-27 01:07:50.412631 | orchestrator | 2025-03-27 01:07:50.412644 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-03-27 01:07:50.412658 | orchestrator | Thursday 27 March 2025 01:07:39 +0000 (0:00:00.140) 0:01:25.625 ******** 2025-03-27 01:07:50.412671 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:07:50.412685 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:07:50.412699 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:07:50.412713 | orchestrator | 2025-03-27 01:07:50.412726 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:07:50.412740 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-27 01:07:50.412754 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 01:07:50.412768 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-27 01:07:50.412782 | orchestrator | 2025-03-27 01:07:50.412796 | orchestrator | 2025-03-27 01:07:50.412809 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:07:50.412823 | orchestrator | Thursday 27 March 2025 01:07:48 +0000 (0:00:09.276) 0:01:34.901 ******** 2025-03-27 01:07:50.412836 | orchestrator | =============================================================================== 2025-03-27 01:07:50.412850 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.93s 2025-03-27 01:07:50.412863 | orchestrator | placement : Restart placement-api container ----------------------------- 9.28s 2025-03-27 01:07:50.412877 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.89s 2025-03-27 01:07:50.412895 | orchestrator | placement : Copying over placement.conf --------------------------------- 5.49s 2025-03-27 01:07:50.412909 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 5.31s 2025-03-27 01:07:50.412923 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.83s 2025-03-27 01:07:50.412944 | orchestrator | service-ks-register : placement | 
Creating projects --------------------- 4.30s 2025-03-27 01:07:50.412958 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 4.06s 2025-03-27 01:07:50.412972 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.91s 2025-03-27 01:07:50.412986 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.52s 2025-03-27 01:07:50.412999 | orchestrator | placement : Creating placement databases -------------------------------- 2.90s 2025-03-27 01:07:50.413013 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.73s 2025-03-27 01:07:50.413026 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.66s 2025-03-27 01:07:50.413040 | orchestrator | placement : Copying over existing policy file --------------------------- 2.63s 2025-03-27 01:07:50.413054 | orchestrator | placement : Check placement containers ---------------------------------- 2.36s 2025-03-27 01:07:50.413067 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.11s 2025-03-27 01:07:50.413081 | orchestrator | placement : Copying over config.json files for services ----------------- 2.09s 2025-03-27 01:07:50.413094 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 2.06s 2025-03-27 01:07:50.413108 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.95s 2025-03-27 01:07:50.413122 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.80s 2025-03-27 01:07:50.413135 | orchestrator | 2025-03-27 01:07:50 | INFO  | Task 41d8667b-91d5-428f-a6fb-3f812e0c4588 is in state SUCCESS 2025-03-27 01:07:50.413154 | orchestrator | 2025-03-27 01:07:50 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:53.456109 | orchestrator | 2025-03-27 01:07:50 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:07:53.456234 | orchestrator | 2025-03-27 01:07:50 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:53.456252 | orchestrator | 2025-03-27 01:07:50 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:53.456267 | orchestrator | 2025-03-27 01:07:50 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:53.456424 | orchestrator | 2025-03-27 01:07:53 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:53.456457 | orchestrator | 2025-03-27 01:07:53 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:53.456970 | orchestrator | 2025-03-27 01:07:53 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:07:53.457752 | orchestrator | 2025-03-27 01:07:53 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:53.458230 | orchestrator | 2025-03-27 01:07:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:56.487991 | orchestrator | 2025-03-27 01:07:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:56.488227 | orchestrator | 2025-03-27 01:07:56 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:56.488934 | orchestrator | 2025-03-27 01:07:56 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:56.488975 | 
orchestrator | 2025-03-27 01:07:56 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:07:56.489654 | orchestrator | 2025-03-27 01:07:56 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:56.490149 | orchestrator | 2025-03-27 01:07:56 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:59.529145 | orchestrator | 2025-03-27 01:07:56 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:07:59.529295 | orchestrator | 2025-03-27 01:07:59 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:07:59.532309 | orchestrator | 2025-03-27 01:07:59 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:07:59.536557 | orchestrator | 2025-03-27 01:07:59 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:07:59.538211 | orchestrator | 2025-03-27 01:07:59 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state STARTED 2025-03-27 01:07:59.539681 | orchestrator | 2025-03-27 01:07:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:07:59.540143 | orchestrator | 2025-03-27 01:07:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:02.595678 | orchestrator | 2025-03-27 01:08:02 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:02.597365 | orchestrator | 2025-03-27 01:08:02 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:02.599883 | orchestrator | 2025-03-27 01:08:02 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:02.602238 | orchestrator | 2025-03-27 01:08:02.603681 | orchestrator | 2025-03-27 01:08:02 | INFO  | Task 1d2068c4-d0f7-4dad-ad4a-f62db440dd6f is in state SUCCESS 2025-03-27 01:08:02.603731 | orchestrator | 2025-03-27 01:08:02.603747 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:08:02.603761 | orchestrator | 2025-03-27 01:08:02.603776 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:08:02.604066 | orchestrator | Thursday 27 March 2025 01:05:31 +0000 (0:00:00.362) 0:00:00.362 ******** 2025-03-27 01:08:02.604083 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:08:02.604099 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:08:02.604114 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:08:02.604128 | orchestrator | 2025-03-27 01:08:02.604142 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:08:02.604157 | orchestrator | Thursday 27 March 2025 01:05:31 +0000 (0:00:00.458) 0:00:00.821 ******** 2025-03-27 01:08:02.604171 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-03-27 01:08:02.604185 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-03-27 01:08:02.604199 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-03-27 01:08:02.604212 | orchestrator | 2025-03-27 01:08:02.604226 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-03-27 01:08:02.604240 | orchestrator | 2025-03-27 01:08:02.604254 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-03-27 01:08:02.604268 | orchestrator | Thursday 27 March 2025 01:05:32 +0000 (0:00:00.362) 0:00:01.183 ******** 
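The interleaved "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" messages above appear to come from the OSISM client on the orchestrator polling the task IDs of the parallel service deployments until each reaches SUCCESS. A minimal Python sketch of that kind of wait loop follows; get_task_state() is a purely hypothetical stand-in for the real status lookup, which is not shown in this log.

import time

# Hypothetical status lookup: each task reports STARTED twice, then SUCCESS.
# A real implementation would query the task backend instead.
_poll_counts = {}

def get_task_state(task_id):
    _poll_counts[task_id] = _poll_counts.get(task_id, 0) + 1
    return "SUCCESS" if _poll_counts[task_id] >= 3 else "STARTED"

def wait_for_tasks(task_ids, interval=1):
    """Poll every task until it leaves STARTED, mirroring the
    'Wait N second(s) until the next check' lines in the log."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["f3b39d8b", "3ce31980", "26860dcf"])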
2025-03-27 01:08:02.604281 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:08:02.604296 | orchestrator | 2025-03-27 01:08:02.604310 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-03-27 01:08:02.604324 | orchestrator | Thursday 27 March 2025 01:05:33 +0000 (0:00:00.914) 0:00:02.098 ******** 2025-03-27 01:08:02.604338 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-03-27 01:08:02.604353 | orchestrator | 2025-03-27 01:08:02.604367 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-03-27 01:08:02.604381 | orchestrator | Thursday 27 March 2025 01:05:36 +0000 (0:00:03.521) 0:00:05.619 ******** 2025-03-27 01:08:02.604395 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-03-27 01:08:02.604425 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-03-27 01:08:02.604463 | orchestrator | 2025-03-27 01:08:02.604478 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-03-27 01:08:02.604491 | orchestrator | Thursday 27 March 2025 01:05:43 +0000 (0:00:07.283) 0:00:12.903 ******** 2025-03-27 01:08:02.604529 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-03-27 01:08:02.604543 | orchestrator | 2025-03-27 01:08:02.604557 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-03-27 01:08:02.604577 | orchestrator | Thursday 27 March 2025 01:05:47 +0000 (0:00:03.864) 0:00:16.768 ******** 2025-03-27 01:08:02.604591 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-27 01:08:02.604741 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-03-27 01:08:02.604757 | orchestrator | 2025-03-27 01:08:02.604772 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-03-27 01:08:02.604786 | orchestrator | Thursday 27 March 2025 01:05:52 +0000 (0:00:04.340) 0:00:21.108 ******** 2025-03-27 01:08:02.604800 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-27 01:08:02.604815 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-03-27 01:08:02.604829 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-03-27 01:08:02.604844 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-03-27 01:08:02.604858 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-03-27 01:08:02.604872 | orchestrator | 2025-03-27 01:08:02.604886 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-03-27 01:08:02.604900 | orchestrator | Thursday 27 March 2025 01:06:10 +0000 (0:00:18.681) 0:00:39.790 ******** 2025-03-27 01:08:02.604915 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-03-27 01:08:02.604929 | orchestrator | 2025-03-27 01:08:02.604943 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-03-27 01:08:02.604957 | orchestrator | Thursday 27 March 2025 01:06:17 +0000 (0:00:06.212) 0:00:46.002 ******** 2025-03-27 01:08:02.604974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.605030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.605062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.605087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605204 | 
orchestrator | 2025-03-27 01:08:02.605219 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-03-27 01:08:02.605235 | orchestrator | Thursday 27 March 2025 01:06:20 +0000 (0:00:03.092) 0:00:49.095 ******** 2025-03-27 01:08:02.605251 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-03-27 01:08:02.605266 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-03-27 01:08:02.605281 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-03-27 01:08:02.605295 | orchestrator | 2025-03-27 01:08:02.605309 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-03-27 01:08:02.605324 | orchestrator | Thursday 27 March 2025 01:06:23 +0000 (0:00:03.086) 0:00:52.181 ******** 2025-03-27 01:08:02.605338 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:08:02.605353 | orchestrator | 2025-03-27 01:08:02.605367 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-03-27 01:08:02.605384 | orchestrator | Thursday 27 March 2025 01:06:23 +0000 (0:00:00.142) 0:00:52.324 ******** 2025-03-27 01:08:02.605401 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:08:02.605418 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:08:02.605435 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:08:02.605451 | orchestrator | 2025-03-27 01:08:02.605468 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-03-27 01:08:02.605485 | orchestrator | Thursday 27 March 2025 01:06:23 +0000 (0:00:00.460) 0:00:52.784 ******** 2025-03-27 01:08:02.605522 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:08:02.605539 | orchestrator | 2025-03-27 01:08:02.605556 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-03-27 01:08:02.605572 | orchestrator | Thursday 27 March 2025 01:06:24 +0000 (0:00:00.799) 0:00:53.584 ******** 2025-03-27 01:08:02.605591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.605623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.605652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.605671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.605790 | orchestrator | 2025-03-27 01:08:02.605805 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-03-27 01:08:02.605820 | orchestrator | Thursday 27 March 2025 01:06:29 +0000 (0:00:05.059) 0:00:58.644 ******** 2025-03-27 01:08:02.605834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 01:08:02.605855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.605878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.605902 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:08:02.605917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 01:08:02.605933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.605948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.605964 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:08:02.605983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 01:08:02.606013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.606081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.606096 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:08:02.606111 | orchestrator | 2025-03-27 01:08:02.606125 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-03-27 01:08:02.606139 | orchestrator | Thursday 27 March 2025 01:06:31 +0000 (0:00:01.330) 0:00:59.975 ******** 2025-03-27 01:08:02.606153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 01:08:02.606168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.606193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.606216 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:08:02.606238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 01:08:02.606254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.606268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.606282 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:08:02.606297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 01:08:02.606317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.606339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.606353 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:08:02.606367 | orchestrator | 2025-03-27 01:08:02.606381 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-03-27 01:08:02.606400 | orchestrator | Thursday 27 March 2025 01:06:32 +0000 (0:00:01.574) 0:01:01.549 ******** 2025-03-27 01:08:02.606415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.606430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.606445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.606471 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.606522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.606539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.606554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.606569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.606588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.606610 | orchestrator | 2025-03-27 01:08:02.606625 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-03-27 01:08:02.606639 | orchestrator | Thursday 27 March 2025 01:06:37 +0000 (0:00:05.157) 0:01:06.706 ******** 2025-03-27 01:08:02.606653 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:08:02.606667 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:08:02.606681 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:08:02.606695 | orchestrator | 2025-03-27 01:08:02.606709 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-03-27 01:08:02.606722 | orchestrator | Thursday 27 March 2025 01:06:41 +0000 (0:00:03.846) 0:01:10.553 ******** 2025-03-27 01:08:02.606736 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 01:08:02.606750 | orchestrator | 2025-03-27 01:08:02.606764 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-03-27 01:08:02.606777 | orchestrator | Thursday 27 March 2025 01:06:43 +0000 (0:00:01.392) 0:01:11.945 ******** 2025-03-27 01:08:02.606791 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:08:02.606805 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:08:02.606819 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:08:02.606832 | orchestrator | 2025-03-27 01:08:02.606846 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-03-27 01:08:02.606860 | orchestrator | Thursday 27 March 2025 01:06:45 +0000 (0:00:02.645) 0:01:14.591 ******** 2025-03-27 01:08:02.606883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.606898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.606913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.606940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.606955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.606976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.606991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.607005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.607027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.607041 | orchestrator | 2025-03-27 01:08:02.607055 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-03-27 01:08:02.607074 | orchestrator | Thursday 27 March 2025 01:07:00 +0000 (0:00:15.218) 0:01:29.810 ******** 2025-03-27 01:08:02.607093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 01:08:02.607115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.607130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.607144 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:08:02.607158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 01:08:02.607180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.607199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.607214 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:08:02.607235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-27 01:08:02.607250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.607265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:08:02.607291 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:08:02.607305 | orchestrator | 2025-03-27 01:08:02.607319 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-03-27 01:08:02.607333 | orchestrator | Thursday 27 March 2025 01:07:02 +0000 (0:00:01.288) 0:01:31.098 ******** 2025-03-27 01:08:02.607352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.607368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.607389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-27 01:08:02.607409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 
01:08:02.607430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.607445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.607460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.607474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.607515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:08:02.607531 | orchestrator | 2025-03-27 01:08:02.607545 | orchestrator | TASK [barbican 
: include_tasks] ************************************************ 2025-03-27 01:08:02.607559 | orchestrator | Thursday 27 March 2025 01:07:06 +0000 (0:00:04.130) 0:01:35.228 ******** 2025-03-27 01:08:02.607573 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:08:02.607587 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:08:02.607601 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:08:02.607615 | orchestrator | 2025-03-27 01:08:02.607629 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-03-27 01:08:02.607650 | orchestrator | Thursday 27 March 2025 01:07:07 +0000 (0:00:00.726) 0:01:35.955 ******** 2025-03-27 01:08:02.607664 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:08:02.607678 | orchestrator | 2025-03-27 01:08:02.607691 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-03-27 01:08:02.607705 | orchestrator | Thursday 27 March 2025 01:07:10 +0000 (0:00:03.116) 0:01:39.072 ******** 2025-03-27 01:08:02.607718 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:08:02.607732 | orchestrator | 2025-03-27 01:08:02.607746 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-03-27 01:08:02.607760 | orchestrator | Thursday 27 March 2025 01:07:12 +0000 (0:00:02.659) 0:01:41.731 ******** 2025-03-27 01:08:02.607774 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:08:02.607787 | orchestrator | 2025-03-27 01:08:02.607801 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-03-27 01:08:02.607815 | orchestrator | Thursday 27 March 2025 01:07:25 +0000 (0:00:12.279) 0:01:54.010 ******** 2025-03-27 01:08:02.607828 | orchestrator | 2025-03-27 01:08:02.607842 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-03-27 01:08:02.607856 | orchestrator | Thursday 27 March 2025 01:07:25 +0000 (0:00:00.145) 0:01:54.156 ******** 2025-03-27 01:08:02.607869 | orchestrator | 2025-03-27 01:08:02.607883 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-03-27 01:08:02.607896 | orchestrator | Thursday 27 March 2025 01:07:25 +0000 (0:00:00.419) 0:01:54.575 ******** 2025-03-27 01:08:02.607910 | orchestrator | 2025-03-27 01:08:02.607924 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-03-27 01:08:02.607937 | orchestrator | Thursday 27 March 2025 01:07:25 +0000 (0:00:00.063) 0:01:54.639 ******** 2025-03-27 01:08:02.607951 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:08:02.607965 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:08:02.607979 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:08:02.607993 | orchestrator | 2025-03-27 01:08:02.608007 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-03-27 01:08:02.608020 | orchestrator | Thursday 27 March 2025 01:07:37 +0000 (0:00:11.753) 0:02:06.393 ******** 2025-03-27 01:08:02.608034 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:08:02.608048 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:08:02.608061 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:08:02.608075 | orchestrator | 2025-03-27 01:08:02.608088 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-03-27 01:08:02.608102 | orchestrator | Thursday 27 March 2025 
01:07:48 +0000 (0:00:10.954) 0:02:17.347 ******** 2025-03-27 01:08:02.608116 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:08:02.608130 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:08:02.608143 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:08:02.608157 | orchestrator | 2025-03-27 01:08:02.608170 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:08:02.608184 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-27 01:08:02.608199 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-27 01:08:02.608213 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-27 01:08:02.608227 | orchestrator | 2025-03-27 01:08:02.608241 | orchestrator | 2025-03-27 01:08:02.608255 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:08:02.608268 | orchestrator | Thursday 27 March 2025 01:08:02 +0000 (0:00:13.678) 0:02:31.026 ******** 2025-03-27 01:08:02.608282 | orchestrator | =============================================================================== 2025-03-27 01:08:02.608296 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.68s 2025-03-27 01:08:02.608316 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 15.22s 2025-03-27 01:08:02.608329 | orchestrator | barbican : Restart barbican-worker container --------------------------- 13.68s 2025-03-27 01:08:02.608343 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.28s 2025-03-27 01:08:02.608357 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.75s 2025-03-27 01:08:02.608376 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.95s 2025-03-27 01:08:02.608391 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.28s 2025-03-27 01:08:02.608409 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 6.21s 2025-03-27 01:08:05.654811 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.16s 2025-03-27 01:08:05.655033 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 5.06s 2025-03-27 01:08:05.655056 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.34s 2025-03-27 01:08:05.655853 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.13s 2025-03-27 01:08:05.655932 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.86s 2025-03-27 01:08:05.655950 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.85s 2025-03-27 01:08:05.655965 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.52s 2025-03-27 01:08:05.655979 | orchestrator | barbican : Creating barbican database ----------------------------------- 3.12s 2025-03-27 01:08:05.655993 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.09s 2025-03-27 01:08:05.656007 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 3.09s 2025-03-27 01:08:05.656020 | 
orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.66s 2025-03-27 01:08:05.656034 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 2.65s 2025-03-27 01:08:05.656049 | orchestrator | 2025-03-27 01:08:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:05.656063 | orchestrator | 2025-03-27 01:08:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:05.656097 | orchestrator | 2025-03-27 01:08:05 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:05.656683 | orchestrator | 2025-03-27 01:08:05 | INFO  | Task 41a959bd-d2fb-469a-bc23-d98536d73612 is in state STARTED 2025-03-27 01:08:05.656734 | orchestrator | 2025-03-27 01:08:05 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:05.656766 | orchestrator | 2025-03-27 01:08:05 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:05.657493 | orchestrator | 2025-03-27 01:08:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:08.690882 | orchestrator | 2025-03-27 01:08:05 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:08.691015 | orchestrator | 2025-03-27 01:08:08 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:08.691349 | orchestrator | 2025-03-27 01:08:08 | INFO  | Task 41a959bd-d2fb-469a-bc23-d98536d73612 is in state STARTED 2025-03-27 01:08:08.692355 | orchestrator | 2025-03-27 01:08:08 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:08.693943 | orchestrator | 2025-03-27 01:08:08 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:08.696843 | orchestrator | 2025-03-27 01:08:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:11.730904 | orchestrator | 2025-03-27 01:08:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:11.731047 | orchestrator | 2025-03-27 01:08:11 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:11.731799 | orchestrator | 2025-03-27 01:08:11 | INFO  | Task 41a959bd-d2fb-469a-bc23-d98536d73612 is in state STARTED 2025-03-27 01:08:11.732396 | orchestrator | 2025-03-27 01:08:11 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:11.733894 | orchestrator | 2025-03-27 01:08:11 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:14.772721 | orchestrator | 2025-03-27 01:08:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:14.772830 | orchestrator | 2025-03-27 01:08:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:14.772864 | orchestrator | 2025-03-27 01:08:14 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:14.773385 | orchestrator | 2025-03-27 01:08:14 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:14.774473 | orchestrator | 2025-03-27 01:08:14 | INFO  | Task 41a959bd-d2fb-469a-bc23-d98536d73612 is in state SUCCESS 2025-03-27 01:08:14.776382 | orchestrator | 2025-03-27 01:08:14 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:14.777386 | orchestrator | 2025-03-27 01:08:14 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:14.779587 | 
orchestrator | 2025-03-27 01:08:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:17.822064 | orchestrator | 2025-03-27 01:08:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:17.822199 | orchestrator | 2025-03-27 01:08:17 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:17.826212 | orchestrator | 2025-03-27 01:08:17 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:17.827389 | orchestrator | 2025-03-27 01:08:17 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:17.827418 | orchestrator | 2025-03-27 01:08:17 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:17.828657 | orchestrator | 2025-03-27 01:08:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:17.828818 | orchestrator | 2025-03-27 01:08:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:20.869026 | orchestrator | 2025-03-27 01:08:20 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:20.869335 | orchestrator | 2025-03-27 01:08:20 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:20.870198 | orchestrator | 2025-03-27 01:08:20 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:20.871247 | orchestrator | 2025-03-27 01:08:20 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:20.871749 | orchestrator | 2025-03-27 01:08:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:23.902739 | orchestrator | 2025-03-27 01:08:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:23.902876 | orchestrator | 2025-03-27 01:08:23 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:23.903182 | orchestrator | 2025-03-27 01:08:23 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:23.903215 | orchestrator | 2025-03-27 01:08:23 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:23.903835 | orchestrator | 2025-03-27 01:08:23 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:23.904227 | orchestrator | 2025-03-27 01:08:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:26.961882 | orchestrator | 2025-03-27 01:08:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:26.962201 | orchestrator | 2025-03-27 01:08:26 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:26.963147 | orchestrator | 2025-03-27 01:08:26 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:26.963178 | orchestrator | 2025-03-27 01:08:26 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:26.963199 | orchestrator | 2025-03-27 01:08:26 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:26.963681 | orchestrator | 2025-03-27 01:08:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:26.966433 | orchestrator | 2025-03-27 01:08:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:30.010086 | orchestrator | 2025-03-27 01:08:30 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:30.011880 | 
orchestrator | 2025-03-27 01:08:30 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:30.019084 | orchestrator | 2025-03-27 01:08:30 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:30.019617 | orchestrator | 2025-03-27 01:08:30 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:30.020759 | orchestrator | 2025-03-27 01:08:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:30.020800 | orchestrator | 2025-03-27 01:08:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:33.058842 | orchestrator | 2025-03-27 01:08:33 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:36.090551 | orchestrator | 2025-03-27 01:08:33 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:36.090766 | orchestrator | 2025-03-27 01:08:33 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:36.090790 | orchestrator | 2025-03-27 01:08:33 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:36.090806 | orchestrator | 2025-03-27 01:08:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:36.090823 | orchestrator | 2025-03-27 01:08:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:36.090858 | orchestrator | 2025-03-27 01:08:36 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:36.092796 | orchestrator | 2025-03-27 01:08:36 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:36.092825 | orchestrator | 2025-03-27 01:08:36 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:36.092847 | orchestrator | 2025-03-27 01:08:36 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:39.127739 | orchestrator | 2025-03-27 01:08:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:39.127857 | orchestrator | 2025-03-27 01:08:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:39.127893 | orchestrator | 2025-03-27 01:08:39 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:39.128269 | orchestrator | 2025-03-27 01:08:39 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:39.128306 | orchestrator | 2025-03-27 01:08:39 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:39.129749 | orchestrator | 2025-03-27 01:08:39 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:39.130213 | orchestrator | 2025-03-27 01:08:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:42.160414 | orchestrator | 2025-03-27 01:08:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:42.160574 | orchestrator | 2025-03-27 01:08:42 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:42.161137 | orchestrator | 2025-03-27 01:08:42 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:42.161940 | orchestrator | 2025-03-27 01:08:42 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:42.162601 | orchestrator | 2025-03-27 01:08:42 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 
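
The interleaved `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` lines above show the deploy wrapper polling the OSISM manager for the state of the queued Kolla plays until each one reports SUCCESS. The snippet below is only a minimal sketch of that polling pattern; `get_state` is a hypothetical stand-in for whatever call looks up a task's state, not the actual OSISM client API, and the log format is imitated, not reproduced from the real tooling.

```python
import time
from typing import Callable, Iterable


def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll each task until it leaves the STARTED state, logging every check.

    `get_state` is a placeholder callable that returns a task's current
    state string ("STARTED", "SUCCESS", ...); the real lookup used by the
    OSISM tooling is not shown in this log and is not reproduced here.
    """
    pending = set(task_ids)
    while pending:
        # Check every still-pending task once per round, like the log above.
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
```

Called with the task IDs seen above and a real state lookup, this loop would produce the same cadence of one status line per task followed by a one-second wait, until every play has finished.
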
2025-03-27 01:08:42.163235 | orchestrator | 2025-03-27 01:08:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:45.191717 | orchestrator | 2025-03-27 01:08:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:45.191951 | orchestrator | 2025-03-27 01:08:45 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:45.193246 | orchestrator | 2025-03-27 01:08:45 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:45.193279 | orchestrator | 2025-03-27 01:08:45 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:45.194213 | orchestrator | 2025-03-27 01:08:45 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:45.195581 | orchestrator | 2025-03-27 01:08:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:48.224103 | orchestrator | 2025-03-27 01:08:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:48.224239 | orchestrator | 2025-03-27 01:08:48 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:48.226346 | orchestrator | 2025-03-27 01:08:48 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:51.261977 | orchestrator | 2025-03-27 01:08:48 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:51.262244 | orchestrator | 2025-03-27 01:08:48 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:51.262268 | orchestrator | 2025-03-27 01:08:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:51.262284 | orchestrator | 2025-03-27 01:08:48 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:51.262315 | orchestrator | 2025-03-27 01:08:51 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:51.263692 | orchestrator | 2025-03-27 01:08:51 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:51.263732 | orchestrator | 2025-03-27 01:08:51 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:51.264146 | orchestrator | 2025-03-27 01:08:51 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:51.264176 | orchestrator | 2025-03-27 01:08:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:54.310674 | orchestrator | 2025-03-27 01:08:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:54.310840 | orchestrator | 2025-03-27 01:08:54 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:08:54.311448 | orchestrator | 2025-03-27 01:08:54 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:54.312377 | orchestrator | 2025-03-27 01:08:54 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:54.313672 | orchestrator | 2025-03-27 01:08:54 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:54.315388 | orchestrator | 2025-03-27 01:08:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:08:54.315421 | orchestrator | 2025-03-27 01:08:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:08:57.358287 | orchestrator | 2025-03-27 01:08:57 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 
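
Each Barbican container definition echoed earlier in this play carries a healthcheck: barbican-api uses `healthcheck_curl http://<node-ip>:9311`, while the worker and keystone-listener use `healthcheck_port <service> 5672`. Those helpers are kolla's own scripts baked into the images and do more than a plain connect test; the following is only a rough, hypothetical Python illustration of the kind of liveness probing they configure (an HTTP endpoint answering, a TCP port reachable), not the kolla implementation.

```python
import socket
import urllib.request


def port_is_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """TCP connect probe; a simplification of what a port-based healthcheck establishes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def http_is_up(url: str, timeout: float = 5.0) -> bool:
    """HTTP GET probe; a simplification of a curl-based API healthcheck."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except OSError:
        return False


# Example values taken from the service definitions logged above.
print(http_is_up("http://192.168.16.10:9311"))   # barbican-api on testbed-node-0
print(port_is_open("192.168.16.10", 5672))       # messaging port probed for the worker
```

The interval/retries/start_period/timeout values in the logged definitions (30s/3/5s/30s) are what Docker uses when scheduling these probes inside the containers.
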
2025-03-27 01:08:57.358822 | orchestrator | 2025-03-27 01:08:57 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:08:57.359886 | orchestrator | 2025-03-27 01:08:57 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:08:57.361034 | orchestrator | 2025-03-27 01:08:57 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:08:57.362125 | orchestrator | 2025-03-27 01:08:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:00.393324 | orchestrator | 2025-03-27 01:08:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:00.393456 | orchestrator | 2025-03-27 01:09:00 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:00.395641 | orchestrator | 2025-03-27 01:09:00 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:00.395892 | orchestrator | 2025-03-27 01:09:00 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state STARTED 2025-03-27 01:09:00.395922 | orchestrator | 2025-03-27 01:09:00 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:00.395943 | orchestrator | 2025-03-27 01:09:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:03.432973 | orchestrator | 2025-03-27 01:09:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:03.433105 | orchestrator | 2025-03-27 01:09:03 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:03.434448 | orchestrator | 2025-03-27 01:09:03 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:03.435071 | orchestrator | 2025-03-27 01:09:03 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:03.437360 | orchestrator | 2025-03-27 01:09:03 | INFO  | Task 3ce31980-1041-4c60-ac6f-d8110fa2f2db is in state SUCCESS 2025-03-27 01:09:03.438985 | orchestrator | 2025-03-27 01:09:03.439093 | orchestrator | 2025-03-27 01:09:03.439112 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:09:03.439128 | orchestrator | 2025-03-27 01:09:03.439393 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:09:03.439417 | orchestrator | Thursday 27 March 2025 01:08:07 +0000 (0:00:00.491) 0:00:00.491 ******** 2025-03-27 01:09:03.439432 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:09:03.439449 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:09:03.439464 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:09:03.439479 | orchestrator | 2025-03-27 01:09:03.439544 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:09:03.439560 | orchestrator | Thursday 27 March 2025 01:08:08 +0000 (0:00:00.686) 0:00:01.177 ******** 2025-03-27 01:09:03.439600 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-03-27 01:09:03.439616 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-03-27 01:09:03.439630 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-03-27 01:09:03.439676 | orchestrator | 2025-03-27 01:09:03.439692 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-03-27 01:09:03.439706 | orchestrator | 2025-03-27 01:09:03.439720 | orchestrator | TASK [Waiting for Keystone public 
port to be UP] ******************************* 2025-03-27 01:09:03.439734 | orchestrator | Thursday 27 March 2025 01:08:09 +0000 (0:00:01.434) 0:00:02.612 ******** 2025-03-27 01:09:03.440334 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:09:03.440358 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:09:03.440372 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:09:03.440387 | orchestrator | 2025-03-27 01:09:03.440401 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:09:03.440416 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:09:03.440432 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:09:03.440459 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:09:03.440474 | orchestrator | 2025-03-27 01:09:03.440488 | orchestrator | 2025-03-27 01:09:03.440526 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:09:03.440541 | orchestrator | Thursday 27 March 2025 01:08:11 +0000 (0:00:01.522) 0:00:04.135 ******** 2025-03-27 01:09:03.440555 | orchestrator | =============================================================================== 2025-03-27 01:09:03.440570 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.52s 2025-03-27 01:09:03.440583 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.43s 2025-03-27 01:09:03.440597 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s 2025-03-27 01:09:03.440611 | orchestrator | 2025-03-27 01:09:03.440625 | orchestrator | 2025-03-27 01:09:03.440639 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:09:03.440653 | orchestrator | 2025-03-27 01:09:03.440672 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:09:03.440686 | orchestrator | Thursday 27 March 2025 01:05:31 +0000 (0:00:00.370) 0:00:00.370 ******** 2025-03-27 01:09:03.440700 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:09:03.440722 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:09:03.440738 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:09:03.440752 | orchestrator | 2025-03-27 01:09:03.440766 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:09:03.440780 | orchestrator | Thursday 27 March 2025 01:05:31 +0000 (0:00:00.424) 0:00:00.794 ******** 2025-03-27 01:09:03.440794 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-03-27 01:09:03.440808 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-03-27 01:09:03.440822 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-03-27 01:09:03.440836 | orchestrator | 2025-03-27 01:09:03.440850 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-03-27 01:09:03.440864 | orchestrator | 2025-03-27 01:09:03.440878 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-03-27 01:09:03.440892 | orchestrator | Thursday 27 March 2025 01:05:32 +0000 (0:00:00.370) 0:00:01.165 ******** 2025-03-27 01:09:03.440906 | orchestrator | included: 
/ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:09:03.440920 | orchestrator | 2025-03-27 01:09:03.440934 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-03-27 01:09:03.440948 | orchestrator | Thursday 27 March 2025 01:05:33 +0000 (0:00:00.934) 0:00:02.100 ******** 2025-03-27 01:09:03.440974 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-03-27 01:09:03.440991 | orchestrator | 2025-03-27 01:09:03.441007 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-03-27 01:09:03.441023 | orchestrator | Thursday 27 March 2025 01:05:37 +0000 (0:00:04.057) 0:00:06.157 ******** 2025-03-27 01:09:03.441039 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-03-27 01:09:03.441056 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-03-27 01:09:03.441071 | orchestrator | 2025-03-27 01:09:03.441088 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-03-27 01:09:03.441104 | orchestrator | Thursday 27 March 2025 01:05:45 +0000 (0:00:07.822) 0:00:13.979 ******** 2025-03-27 01:09:03.441120 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-27 01:09:03.441136 | orchestrator | 2025-03-27 01:09:03.441152 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-03-27 01:09:03.441167 | orchestrator | Thursday 27 March 2025 01:05:48 +0000 (0:00:03.848) 0:00:17.827 ******** 2025-03-27 01:09:03.441276 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-27 01:09:03.441298 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-03-27 01:09:03.441315 | orchestrator | 2025-03-27 01:09:03.441330 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-03-27 01:09:03.441344 | orchestrator | Thursday 27 March 2025 01:05:53 +0000 (0:00:04.414) 0:00:22.242 ******** 2025-03-27 01:09:03.441358 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-27 01:09:03.441372 | orchestrator | 2025-03-27 01:09:03.441386 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-03-27 01:09:03.441400 | orchestrator | Thursday 27 March 2025 01:05:56 +0000 (0:00:03.656) 0:00:25.899 ******** 2025-03-27 01:09:03.441414 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-03-27 01:09:03.441427 | orchestrator | 2025-03-27 01:09:03.441441 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-03-27 01:09:03.441455 | orchestrator | Thursday 27 March 2025 01:06:01 +0000 (0:00:04.611) 0:00:30.511 ******** 2025-03-27 01:09:03.441471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.441498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.441560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.441576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441645 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 
01:09:03.441870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.441915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.441947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.441967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.441988 | orchestrator | 2025-03-27 01:09:03.442003 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-03-27 01:09:03.442065 | orchestrator | Thursday 27 March 2025 01:06:04 +0000 (0:00:03.401) 0:00:33.912 ******** 2025-03-27 01:09:03.442083 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:09:03.442098 | orchestrator | 2025-03-27 
01:09:03.442112 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-03-27 01:09:03.442125 | orchestrator | Thursday 27 March 2025 01:06:05 +0000 (0:00:00.131) 0:00:34.043 ******** 2025-03-27 01:09:03.442139 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:09:03.442153 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:09:03.442166 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:09:03.442180 | orchestrator | 2025-03-27 01:09:03.442194 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-03-27 01:09:03.442208 | orchestrator | Thursday 27 March 2025 01:06:05 +0000 (0:00:00.567) 0:00:34.611 ******** 2025-03-27 01:09:03.442222 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:09:03.442236 | orchestrator | 2025-03-27 01:09:03.442250 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-03-27 01:09:03.442264 | orchestrator | Thursday 27 March 2025 01:06:06 +0000 (0:00:00.666) 0:00:35.278 ******** 2025-03-27 01:09:03.442278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.442330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.442348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.442376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.442720 | orchestrator | 2025-03-27 01:09:03.442734 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-03-27 01:09:03.442748 | orchestrator | Thursday 27 March 2025 01:06:13 +0000 (0:00:07.041) 0:00:42.319 ******** 2025-03-27 01:09:03.442762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.442808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 01:09:03.442825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.442852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.442867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 01:09:03.442881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.442896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.442940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.442957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.442979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.442998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443027 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:09:03.443041 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:09:03.443055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.443099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 01:09:03.443116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443187 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:09:03.443201 | orchestrator | 2025-03-27 01:09:03.443215 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-03-27 01:09:03.443234 | orchestrator | Thursday 27 March 2025 01:06:16 +0000 (0:00:02.741) 0:00:45.061 ******** 2025-03-27 01:09:03.443249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.443293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 01:09:03.443317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443381 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:09:03.443395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.443410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 01:09:03.443471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443552 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:09:03.443566 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.443580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 01:09:03.443637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443689 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.443703 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:09:03.443717 | orchestrator | 2025-03-27 01:09:03.443731 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-03-27 01:09:03.443745 | orchestrator | Thursday 27 March 2025 01:06:19 +0000 (0:00:03.064) 0:00:48.125 ******** 2025-03-27 01:09:03.443760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.443811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.443833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.443848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.443863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.443877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.443892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.443941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.443963 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.443978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.443993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.444152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.444166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.444207 | orchestrator | 2025-03-27 01:09:03.444221 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-03-27 01:09:03.444235 | orchestrator | Thursday 27 March 2025 01:06:26 +0000 (0:00:07.353) 0:00:55.479 ******** 2025-03-27 01:09:03.444281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.444298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.444313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.444328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444609 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.444691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.444727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.444741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.444756 | orchestrator | 2025-03-27 01:09:03.444770 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-03-27 01:09:03.444784 | orchestrator | Thursday 27 March 2025 01:06:53 +0000 (0:00:27.425) 0:01:22.904 ******** 2025-03-27 01:09:03.444798 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-03-27 01:09:03.444812 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-03-27 01:09:03.444826 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-03-27 01:09:03.444840 | orchestrator | 2025-03-27 01:09:03.444854 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-03-27 01:09:03.444868 | orchestrator | Thursday 27 March 2025 01:07:04 +0000 (0:00:10.711) 0:01:33.615 ******** 2025-03-27 01:09:03.444889 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-03-27 01:09:03.444903 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-03-27 01:09:03.444923 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-03-27 01:09:03.444938 | orchestrator | 2025-03-27 01:09:03.444952 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-03-27 01:09:03.444965 | orchestrator | Thursday 27 March 2025 01:07:10 +0000 (0:00:06.045) 0:01:39.661 ******** 2025-03-27 01:09:03.444980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.445012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.445029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.445043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445363 | orchestrator | 2025-03-27 01:09:03.445377 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-03-27 01:09:03.445391 | orchestrator | Thursday 27 March 2025 01:07:14 +0000 
(0:00:03.743) 0:01:43.405 ******** 2025-03-27 01:09:03.445411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.445427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.445456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.445471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445759 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.445852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.445865 | orchestrator | 2025-03-27 01:09:03.445877 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-03-27 01:09:03.445890 | orchestrator | Thursday 27 March 2025 01:07:18 +0000 (0:00:04.191) 0:01:47.597 ******** 2025-03-27 01:09:03.445902 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:09:03.445915 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:09:03.445927 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:09:03.445940 | orchestrator | 2025-03-27 01:09:03.445952 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-03-27 01:09:03.445964 | orchestrator | Thursday 27 March 2025 01:07:19 +0000 (0:00:00.977) 0:01:48.574 ******** 2025-03-27 01:09:03.445985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.445998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 01:09:03.446011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446134 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:09:03.446147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.446160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 01:09:03.446179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2025-03-27 01:09:03.446259 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:09:03.446272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-27 01:09:03.446289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-27 01:09:03.446316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446381 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:09:03.446394 | orchestrator | 2025-03-27 01:09:03.446406 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-03-27 01:09:03.446418 | orchestrator | Thursday 27 March 2025 01:07:20 +0000 (0:00:01.126) 0:01:49.701 ******** 2025-03-27 01:09:03.446436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.446456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.446477 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-27 01:09:03.446490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 
01:09:03.446752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-03-27 01:09:03.446791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-27 01:09:03.446804 | orchestrator | 2025-03-27 01:09:03.446817 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-03-27 01:09:03.446830 | orchestrator | Thursday 27 March 2025 01:07:26 +0000 (0:00:06.208) 0:01:55.909 ******** 2025-03-27 01:09:03.446842 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:09:03.446855 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:09:03.446867 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:09:03.446879 | orchestrator | 2025-03-27 01:09:03.446891 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-03-27 01:09:03.446904 | orchestrator | Thursday 27 March 2025 01:07:27 +0000 (0:00:00.775) 0:01:56.684 ******** 2025-03-27 01:09:03.446916 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-03-27 01:09:03.446929 | orchestrator | 2025-03-27 01:09:03.446941 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-03-27 01:09:03.446953 | orchestrator | Thursday 27 March 2025 01:07:30 +0000 (0:00:02.596) 0:01:59.281 ******** 2025-03-27 01:09:03.446966 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-27 01:09:03.446978 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-03-27 01:09:03.446990 | orchestrator | 2025-03-27 01:09:03.447003 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-03-27 01:09:03.447015 | orchestrator | Thursday 27 March 2025 01:07:33 +0000 (0:00:02.709) 0:02:01.991 ******** 2025-03-27 01:09:03.447027 | 
orchestrator | changed: [testbed-node-0] 2025-03-27 01:09:03.447040 | orchestrator | 2025-03-27 01:09:03.447052 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-03-27 01:09:03.447064 | orchestrator | Thursday 27 March 2025 01:07:48 +0000 (0:00:15.761) 0:02:17.753 ******** 2025-03-27 01:09:03.447076 | orchestrator | 2025-03-27 01:09:03.447088 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-03-27 01:09:03.447101 | orchestrator | Thursday 27 March 2025 01:07:48 +0000 (0:00:00.082) 0:02:17.835 ******** 2025-03-27 01:09:03.447113 | orchestrator | 2025-03-27 01:09:03.447125 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-03-27 01:09:03.447142 | orchestrator | Thursday 27 March 2025 01:07:49 +0000 (0:00:00.124) 0:02:17.960 ******** 2025-03-27 01:09:03.447161 | orchestrator | 2025-03-27 01:09:03.447173 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-03-27 01:09:03.447185 | orchestrator | Thursday 27 March 2025 01:07:49 +0000 (0:00:00.144) 0:02:18.105 ******** 2025-03-27 01:09:03.447197 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:09:03.447210 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:09:03.447222 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:09:03.447234 | orchestrator | 2025-03-27 01:09:03.447246 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-03-27 01:09:03.447258 | orchestrator | Thursday 27 March 2025 01:08:04 +0000 (0:00:15.634) 0:02:33.739 ******** 2025-03-27 01:09:03.447270 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:09:03.447283 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:09:03.447295 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:09:03.447308 | orchestrator | 2025-03-27 01:09:03.447320 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-03-27 01:09:03.447332 | orchestrator | Thursday 27 March 2025 01:08:12 +0000 (0:00:08.015) 0:02:41.755 ******** 2025-03-27 01:09:03.447344 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:09:03.447357 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:09:03.447369 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:09:03.447381 | orchestrator | 2025-03-27 01:09:03.447394 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-03-27 01:09:03.447406 | orchestrator | Thursday 27 March 2025 01:08:25 +0000 (0:00:12.787) 0:02:54.542 ******** 2025-03-27 01:09:03.447418 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:09:03.447431 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:09:03.447443 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:09:03.447455 | orchestrator | 2025-03-27 01:09:03.447467 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-03-27 01:09:03.447479 | orchestrator | Thursday 27 March 2025 01:08:34 +0000 (0:00:09.296) 0:03:03.838 ******** 2025-03-27 01:09:03.447491 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:09:03.447518 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:09:03.447531 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:09:03.447544 | orchestrator | 2025-03-27 01:09:03.447556 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 
2025-03-27 01:09:03.447568 | orchestrator | Thursday 27 March 2025 01:08:42 +0000 (0:00:08.076) 0:03:11.915 ********
2025-03-27 01:09:03.447580 | orchestrator | changed: [testbed-node-0]
2025-03-27 01:09:03.447593 | orchestrator | changed: [testbed-node-2]
2025-03-27 01:09:03.447605 | orchestrator | changed: [testbed-node-1]
2025-03-27 01:09:03.447617 | orchestrator |
2025-03-27 01:09:03.447629 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-03-27 01:09:03.447641 | orchestrator | Thursday 27 March 2025 01:08:52 +0000 (0:00:09.724) 0:03:21.639 ********
2025-03-27 01:09:03.447653 | orchestrator | changed: [testbed-node-0]
2025-03-27 01:09:03.447666 | orchestrator |
2025-03-27 01:09:03.447678 | orchestrator | PLAY RECAP *********************************************************************
2025-03-27 01:09:03.447695 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-03-27 01:09:06.487363 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-27 01:09:06.487471 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-27 01:09:06.487487 | orchestrator |
2025-03-27 01:09:06.487502 | orchestrator |
2025-03-27 01:09:06.487560 | orchestrator | TASKS RECAP ********************************************************************
2025-03-27 01:09:06.487576 | orchestrator | Thursday 27 March 2025 01:08:59 +0000 (0:00:06.768) 0:03:28.408 ********
2025-03-27 01:09:06.487590 | orchestrator | ===============================================================================
2025-03-27 01:09:06.487628 | orchestrator | designate : Copying over designate.conf -------------------------------- 27.43s
2025-03-27 01:09:06.487643 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.76s
2025-03-27 01:09:06.487657 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 15.63s
2025-03-27 01:09:06.487670 | orchestrator | designate : Restart designate-central container ------------------------ 12.79s
2025-03-27 01:09:06.487684 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 10.71s
2025-03-27 01:09:06.487698 | orchestrator | designate : Restart designate-worker container -------------------------- 9.72s
2025-03-27 01:09:06.487712 | orchestrator | designate : Restart designate-producer container ------------------------ 9.30s
2025-03-27 01:09:06.487726 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.08s
2025-03-27 01:09:06.487740 | orchestrator | designate : Restart designate-api container ----------------------------- 8.02s
2025-03-27 01:09:06.487753 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.82s
2025-03-27 01:09:06.487767 | orchestrator | designate : Copying over config.json files for services ----------------- 7.35s
2025-03-27 01:09:06.487781 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.04s
2025-03-27 01:09:06.487796 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.77s
2025-03-27 01:09:06.487811 | orchestrator | designate : Check designate containers ---------------------------------- 6.21s
2025-03-27 01:09:06.487826 | orchestrator | designate : Copying over named.conf ------------------------------------- 6.05s
2025-03-27 01:09:06.487839 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.61s
2025-03-27 01:09:06.487853 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.41s
2025-03-27 01:09:06.487867 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.19s
2025-03-27 01:09:06.487880 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.06s
2025-03-27 01:09:06.487894 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.85s
2025-03-27 01:09:06.487911 | orchestrator | 2025-03-27 01:09:03 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED
2025-03-27 01:09:06.487929 | orchestrator | 2025-03-27 01:09:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 01:09:06.487946 | orchestrator | 2025-03-27 01:09:03 | INFO  | Wait 1 second(s) until the next check
2025-03-27 01:09:06.487978 | orchestrator | 2025-03-27 01:09:06 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED
2025-03-27 01:09:06.488731 | orchestrator | 2025-03-27 01:09:06 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED
2025-03-27 01:09:06.489325 | orchestrator | 2025-03-27 01:09:06 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED
2025-03-27 01:09:06.490137 | orchestrator | 2025-03-27 01:09:06 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED
2025-03-27 01:09:06.491839 | orchestrator | 2025-03-27 01:09:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 01:09:09.522693 | orchestrator | 2025-03-27 01:09:06 | INFO  | Wait 1 second(s) until the next check
2025-03-27 01:09:09.522831 | orchestrator | 2025-03-27 01:09:09 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED
2025-03-27 01:09:09.523173 | orchestrator | 2025-03-27 01:09:09 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED
2025-03-27 01:09:09.523206 | orchestrator | 2025-03-27 01:09:09 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED
2025-03-27 01:09:09.523732 | orchestrator | 2025-03-27 01:09:09 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED
2025-03-27 01:09:09.524246 | orchestrator | 2025-03-27 01:09:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 01:09:12.559973 | orchestrator | 2025-03-27 01:09:09 | INFO  | Wait 1 second(s) until the next check
2025-03-27 01:09:12.560102 | orchestrator | 2025-03-27 01:09:12 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED
2025-03-27 01:09:12.560366 | orchestrator | 2025-03-27 01:09:12 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED
2025-03-27 01:09:12.560400 | orchestrator | 2025-03-27 01:09:12 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED
2025-03-27 01:09:12.560831 | orchestrator | 2025-03-27 01:09:12 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED
2025-03-27 01:09:12.561372 | orchestrator | 2025-03-27 01:09:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 01:09:15.591682 | orchestrator | 2025-03-27 01:09:12 | INFO  | Wait 1 second(s) until the next check
2025-03-27 01:09:15.591840 | orchestrator | 2025-03-27 01:09:15 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state
STARTED 2025-03-27 01:09:18.618392 | orchestrator | 2025-03-27 01:09:15 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:18.618565 | orchestrator | 2025-03-27 01:09:15 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:18.618589 | orchestrator | 2025-03-27 01:09:15 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:18.618605 | orchestrator | 2025-03-27 01:09:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:18.618620 | orchestrator | 2025-03-27 01:09:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:18.618652 | orchestrator | 2025-03-27 01:09:18 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:18.619594 | orchestrator | 2025-03-27 01:09:18 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:18.619631 | orchestrator | 2025-03-27 01:09:18 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:18.620726 | orchestrator | 2025-03-27 01:09:18 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:18.620798 | orchestrator | 2025-03-27 01:09:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:21.645934 | orchestrator | 2025-03-27 01:09:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:21.646124 | orchestrator | 2025-03-27 01:09:21 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:21.648076 | orchestrator | 2025-03-27 01:09:21 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:21.648117 | orchestrator | 2025-03-27 01:09:21 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:21.648882 | orchestrator | 2025-03-27 01:09:21 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:21.650248 | orchestrator | 2025-03-27 01:09:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:24.672491 | orchestrator | 2025-03-27 01:09:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:24.672650 | orchestrator | 2025-03-27 01:09:24 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:24.673308 | orchestrator | 2025-03-27 01:09:24 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:24.673373 | orchestrator | 2025-03-27 01:09:24 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:24.673752 | orchestrator | 2025-03-27 01:09:24 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:24.675237 | orchestrator | 2025-03-27 01:09:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:27.709970 | orchestrator | 2025-03-27 01:09:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:27.710350 | orchestrator | 2025-03-27 01:09:27 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:27.710844 | orchestrator | 2025-03-27 01:09:27 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:27.710885 | orchestrator | 2025-03-27 01:09:27 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:27.711562 | orchestrator | 2025-03-27 01:09:27 | INFO  | Task 
26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:27.712381 | orchestrator | 2025-03-27 01:09:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:30.744999 | orchestrator | 2025-03-27 01:09:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:30.745120 | orchestrator | 2025-03-27 01:09:30 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:30.747985 | orchestrator | 2025-03-27 01:09:30 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:30.750328 | orchestrator | 2025-03-27 01:09:30 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:30.753160 | orchestrator | 2025-03-27 01:09:30 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:30.753943 | orchestrator | 2025-03-27 01:09:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:33.795425 | orchestrator | 2025-03-27 01:09:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:33.795615 | orchestrator | 2025-03-27 01:09:33 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:33.797775 | orchestrator | 2025-03-27 01:09:33 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:33.799407 | orchestrator | 2025-03-27 01:09:33 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:33.801584 | orchestrator | 2025-03-27 01:09:33 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:33.802750 | orchestrator | 2025-03-27 01:09:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:36.857268 | orchestrator | 2025-03-27 01:09:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:36.857407 | orchestrator | 2025-03-27 01:09:36 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:36.858691 | orchestrator | 2025-03-27 01:09:36 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:36.861703 | orchestrator | 2025-03-27 01:09:36 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:36.863659 | orchestrator | 2025-03-27 01:09:36 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:36.863695 | orchestrator | 2025-03-27 01:09:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:39.922268 | orchestrator | 2025-03-27 01:09:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:39.922400 | orchestrator | 2025-03-27 01:09:39 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:39.925378 | orchestrator | 2025-03-27 01:09:39 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:39.928440 | orchestrator | 2025-03-27 01:09:39 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:39.932670 | orchestrator | 2025-03-27 01:09:39 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:39.934373 | orchestrator | 2025-03-27 01:09:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:39.935024 | orchestrator | 2025-03-27 01:09:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:42.991986 | orchestrator | 2025-03-27 01:09:42 | INFO  | Task 
f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:42.994943 | orchestrator | 2025-03-27 01:09:42 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:42.996408 | orchestrator | 2025-03-27 01:09:42 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:42.998994 | orchestrator | 2025-03-27 01:09:42 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:43.000895 | orchestrator | 2025-03-27 01:09:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:46.042257 | orchestrator | 2025-03-27 01:09:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:46.042381 | orchestrator | 2025-03-27 01:09:46 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:46.042556 | orchestrator | 2025-03-27 01:09:46 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:46.043490 | orchestrator | 2025-03-27 01:09:46 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:46.043968 | orchestrator | 2025-03-27 01:09:46 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:46.044701 | orchestrator | 2025-03-27 01:09:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:46.044866 | orchestrator | 2025-03-27 01:09:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:49.081042 | orchestrator | 2025-03-27 01:09:49 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:49.081254 | orchestrator | 2025-03-27 01:09:49 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:49.081280 | orchestrator | 2025-03-27 01:09:49 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state STARTED 2025-03-27 01:09:49.082668 | orchestrator | 2025-03-27 01:09:49 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:49.083399 | orchestrator | 2025-03-27 01:09:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:52.122408 | orchestrator | 2025-03-27 01:09:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:52.122583 | orchestrator | 2025-03-27 01:09:52 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:52.124343 | orchestrator | 2025-03-27 01:09:52 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:52.125927 | orchestrator | 2025-03-27 01:09:52 | INFO  | Task bbc6121a-2b0c-4da6-bab8-afa5f5b2f89a is in state SUCCESS 2025-03-27 01:09:52.127755 | orchestrator | 2025-03-27 01:09:52 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:52.129484 | orchestrator | 2025-03-27 01:09:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:52.129812 | orchestrator | 2025-03-27 01:09:52 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:55.187690 | orchestrator | 2025-03-27 01:09:55 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:55.188377 | orchestrator | 2025-03-27 01:09:55 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:55.189853 | orchestrator | 2025-03-27 01:09:55 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:09:55.191172 | orchestrator | 2025-03-27 
01:09:55 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:55.192623 | orchestrator | 2025-03-27 01:09:55 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:58.247628 | orchestrator | 2025-03-27 01:09:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:09:58.247764 | orchestrator | 2025-03-27 01:09:58 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:09:58.250710 | orchestrator | 2025-03-27 01:09:58 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:09:58.252812 | orchestrator | 2025-03-27 01:09:58 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:09:58.255635 | orchestrator | 2025-03-27 01:09:58 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:09:58.258293 | orchestrator | 2025-03-27 01:09:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:09:58.258420 | orchestrator | 2025-03-27 01:09:58 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:01.313275 | orchestrator | 2025-03-27 01:10:01 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:01.313577 | orchestrator | 2025-03-27 01:10:01 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:01.314339 | orchestrator | 2025-03-27 01:10:01 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:01.315197 | orchestrator | 2025-03-27 01:10:01 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:10:01.316067 | orchestrator | 2025-03-27 01:10:01 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:04.361931 | orchestrator | 2025-03-27 01:10:01 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:04.362126 | orchestrator | 2025-03-27 01:10:04 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:04.363706 | orchestrator | 2025-03-27 01:10:04 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:04.365434 | orchestrator | 2025-03-27 01:10:04 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:04.367203 | orchestrator | 2025-03-27 01:10:04 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:10:04.368791 | orchestrator | 2025-03-27 01:10:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:07.421983 | orchestrator | 2025-03-27 01:10:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:07.422177 | orchestrator | 2025-03-27 01:10:07 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:07.423233 | orchestrator | 2025-03-27 01:10:07 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:07.423346 | orchestrator | 2025-03-27 01:10:07 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:07.423939 | orchestrator | 2025-03-27 01:10:07 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state STARTED 2025-03-27 01:10:07.423970 | orchestrator | 2025-03-27 01:10:07 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:10.467899 | orchestrator | 2025-03-27 01:10:07 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:10.468041 | orchestrator | 2025-03-27 
01:10:10 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED
2025-03-27 01:10:10.468927 | orchestrator | 2025-03-27 01:10:10 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED
2025-03-27 01:10:10.482612 | orchestrator | 2025-03-27 01:10:10 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED
2025-03-27 01:10:10.484804 | orchestrator | 2025-03-27 01:10:10 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED
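
The entries above show the deployment waiting on OSISM manager tasks: each queued task is identified by a UUID and polled until it leaves the STARTED state, with a short sleep between checks. A minimal sketch of such a wait loop, assuming a hypothetical get_state callable in place of whatever the manager really queries, could look like this:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll task states until none of them is PENDING or STARTED any more.

    get_state maps a task id to a state string such as "STARTED" or "SUCCESS";
    it is a stand-in here, not the real OSISM client API.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Example run with a fake state source that reports every task as finished.
wait_for_tasks(["f3b39d8b-ce46-4c14-a444-c2fd3631526c"], get_state=lambda _: "SUCCESS")
```
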
2025-03-27 01:10:10.489401 | orchestrator |
2025-03-27 01:10:10.489889 | orchestrator |
2025-03-27 01:10:10.489923 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-27 01:10:10.489939 | orchestrator |
2025-03-27 01:10:10.489953 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-27 01:10:10.489967 | orchestrator | Thursday 27 March 2025 01:09:10 +0000 (0:00:01.256) 0:00:01.256 ********
2025-03-27 01:10:10.489981 | orchestrator | ok: [testbed-node-3]
2025-03-27 01:10:10.489996 | orchestrator | ok: [testbed-node-4]
2025-03-27 01:10:10.490010 | orchestrator | ok: [testbed-node-5]
2025-03-27 01:10:10.490066 | orchestrator | ok: [testbed-node-0]
2025-03-27 01:10:10.490080 | orchestrator | ok: [testbed-node-1]
2025-03-27 01:10:10.490094 | orchestrator | ok: [testbed-node-2]
2025-03-27 01:10:10.490108 | orchestrator | ok: [testbed-manager]
2025-03-27 01:10:10.490121 | orchestrator |
2025-03-27 01:10:10.490135 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-27 01:10:10.490149 | orchestrator | Thursday 27 March 2025 01:09:13 +0000 (0:00:03.381) 0:00:04.637 ********
2025-03-27 01:10:10.490163 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-03-27 01:10:10.490177 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-03-27 01:10:10.490191 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-03-27 01:10:10.490204 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-03-27 01:10:10.490218 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-03-27 01:10:10.490231 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-03-27 01:10:10.490246 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-03-27 01:10:10.490260 | orchestrator |
2025-03-27 01:10:10.490273 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-03-27 01:10:10.490287 | orchestrator |
2025-03-27 01:10:10.490300 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-03-27 01:10:10.490314 | orchestrator | Thursday 27 March 2025 01:09:16 +0000 (0:00:02.563) 0:00:07.201 ********
2025-03-27 01:10:10.490344 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2025-03-27 01:10:10.490360 | orchestrator |
2025-03-27 01:10:10.490374 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-03-27 01:10:10.490388 | orchestrator | Thursday 27 March 2025 01:09:20 +0000 (0:00:04.068) 0:00:11.269 ********
2025-03-27 01:10:10.490402 | orchestrator | changed: [testbed-node-3] => (item=swift (object-store))
2025-03-27 01:10:10.490415 | orchestrator |
2025-03-27 01:10:10.490429 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-03-27 01:10:10.490442 | orchestrator | Thursday 27 March 2025 01:09:24 +0000 (0:00:04.417) 0:00:15.686 ********
2025-03-27 01:10:10.490457 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-03-27 01:10:10.490495 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-03-27 01:10:10.490538 | orchestrator |
2025-03-27 01:10:10.490555 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-03-27 01:10:10.490570 | orchestrator | Thursday 27 March 2025 01:09:31 +0000 (0:00:07.066) 0:00:22.752 ********
2025-03-27 01:10:10.490585 | orchestrator | ok: [testbed-node-3] => (item=service)
2025-03-27 01:10:10.490601 | orchestrator |
2025-03-27 01:10:10.490617 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-03-27 01:10:10.490633 | orchestrator | Thursday 27 March 2025 01:09:35 +0000 (0:00:03.430) 0:00:26.183 ********
2025-03-27 01:10:10.490648 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-03-27 01:10:10.490663 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service)
2025-03-27 01:10:10.490679 | orchestrator |
2025-03-27 01:10:10.490694 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-03-27 01:10:10.490709 | orchestrator | Thursday 27 March 2025 01:09:39 +0000 (0:00:04.051) 0:00:30.235 ********
2025-03-27 01:10:10.490725 | orchestrator | ok: [testbed-node-3] => (item=admin)
2025-03-27 01:10:10.490740 | orchestrator | changed: [testbed-node-3] => (item=ResellerAdmin)
2025-03-27 01:10:10.490755 | orchestrator |
2025-03-27 01:10:10.490771 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-03-27 01:10:10.490786 | orchestrator | Thursday 27 March 2025 01:09:45 +0000 (0:00:06.771) 0:00:37.007 ********
2025-03-27 01:10:10.490801 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service -> admin)
2025-03-27 01:10:10.490816 | orchestrator |
2025-03-27 01:10:10.490832 | orchestrator | PLAY RECAP *********************************************************************
2025-03-27 01:10:10.490846 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-27 01:10:10.490861 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-27 01:10:10.490875 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-27 01:10:10.490889 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-27 01:10:10.490903 | orchestrator | testbed-node-3 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-27 01:10:10.490935 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-27 01:10:10.490950 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
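
The ceph-rgw play above registers the RADOS Gateway in Keystone under the swift object-store service: a service record, internal and public endpoints, the ceph_rgw user in the service project, the ResellerAdmin role, and an admin role grant. The play drives this through the kolla-ansible service-ks-register role; purely as an illustration of the same sequence (not the code the role runs), an openstacksdk sketch might look like the following, with the cloud name and password as placeholders:

```python
import openstack

# Illustrative sketch only; the deployment itself uses the service-ks-register role.
conn = openstack.connect(cloud="testbed")  # placeholder clouds.yaml entry

service = conn.identity.create_service(name="swift", type="object-store")
endpoints = {
    "internal": "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
    "public": "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
}
for interface, url in endpoints.items():
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

project = conn.identity.find_project("service")
user = conn.identity.create_user(name="ceph_rgw", password="REPLACE_ME",
                                 default_project_id=project.id)
conn.identity.create_role(name="ResellerAdmin")
admin_role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin_role)
```
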
2025-03-27 01:10:10.490964 | orchestrator |
2025-03-27 01:10:10.490978 | orchestrator |
2025-03-27 01:10:10.490992 | orchestrator | TASKS RECAP ********************************************************************
2025-03-27 01:10:10.491006 | orchestrator | Thursday 27 March 2025 01:09:51 +0000 (0:00:05.693) 0:00:42.700 ********
2025-03-27 01:10:10.491020 | orchestrator | ===============================================================================
2025-03-27 01:10:10.491039 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.07s
2025-03-27 01:10:10.491053 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.77s
2025-03-27 01:10:10.491067 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.69s
2025-03-27 01:10:10.491081 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.42s
2025-03-27 01:10:10.491095 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 4.07s
2025-03-27 01:10:10.491109 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.05s
2025-03-27 01:10:10.491130 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.43s
2025-03-27 01:10:10.491144 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.38s
2025-03-27 01:10:10.491158 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.56s
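
The TASKS RECAP above is the per-task timing summary printed at the end of the run: the task name padded with dashes, then the duration in seconds. When comparing runs it can be handy to pull these numbers out of a saved console log; a small sketch that assumes exactly the "name ---- 1.23s" layout shown here:

```python
import re

# Matches profile lines such as
# "service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.07s"
PROFILE_LINE = re.compile(r"^(?P<task>.+?) -+ (?P<seconds>\d+\.\d+)s$")

def parse_task_timings(lines):
    """Return (task name, seconds) pairs from a TASKS RECAP section."""
    timings = []
    for line in lines:
        match = PROFILE_LINE.match(line.strip())
        if match:
            timings.append((match.group("task"), float(match.group("seconds"))))
    return timings

example = [
    "service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.07s",
    "Group hosts based on enabled services ----------------------------------- 2.56s",
]
print(parse_task_timings(example))  # [('service-ks-register : ceph-rgw | Creating endpoints', 7.07), ...]
```
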
2025-03-27 01:10:10.491172 | orchestrator |
2025-03-27 01:10:10.491185 | orchestrator |
2025-03-27 01:10:10.491199 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-27 01:10:10.491213 | orchestrator |
2025-03-27 01:10:10.491227 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-27 01:10:10.491240 | orchestrator | Thursday 27 March 2025 01:07:56 +0000 (0:00:00.530) 0:00:00.530 ********
2025-03-27 01:10:10.491254 | orchestrator | ok: [testbed-node-0]
2025-03-27 01:10:10.491268 | orchestrator | ok: [testbed-node-1]
2025-03-27 01:10:10.491287 | orchestrator | ok: [testbed-node-2]
2025-03-27 01:10:10.491301 | orchestrator |
2025-03-27 01:10:10.491315 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-27 01:10:10.491329 | orchestrator | Thursday 27 March 2025 01:07:56 +0000 (0:00:00.617) 0:00:01.148 ********
2025-03-27 01:10:10.491343 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-03-27 01:10:10.491357 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-03-27 01:10:10.491370 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-03-27 01:10:10.491384 | orchestrator |
2025-03-27 01:10:10.491398 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-03-27 01:10:10.491411 | orchestrator |
2025-03-27 01:10:10.491425 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-03-27 01:10:10.491439 | orchestrator | Thursday 27 March 2025 01:07:57 +0000 (0:00:00.316) 0:00:01.465 ********
2025-03-27 01:10:10.491453 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-27 01:10:10.491467 | orchestrator |
2025-03-27 01:10:10.491481 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-03-27 01:10:10.491495 | orchestrator | Thursday 27 March 2025 01:07:57 +0000 (0:00:00.595) 0:00:02.060 ********
2025-03-27 01:10:10.491508 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-03-27 01:10:10.491540 | orchestrator |
2025-03-27 01:10:10.491555 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-03-27 01:10:10.491568 | orchestrator | Thursday 27 March 2025 01:08:01 +0000 (0:00:03.701) 0:00:05.762 ********
2025-03-27 01:10:10.491582 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-03-27 01:10:10.491596 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-03-27 01:10:10.491610 | orchestrator |
2025-03-27 01:10:10.491712 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-03-27 01:10:10.491728 | orchestrator | Thursday 27 March 2025 01:08:08 +0000 (0:00:07.481) 0:00:13.243 ********
2025-03-27 01:10:10.491742 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-03-27 01:10:10.491755 | orchestrator |
2025-03-27 01:10:10.491769 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-03-27 01:10:10.491783 | orchestrator | Thursday 27 March 2025 01:08:12 +0000 (0:00:03.967) 0:00:17.211 ********
2025-03-27 01:10:10.491797 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-03-27 01:10:10.491810 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-03-27 01:10:10.491824 | orchestrator |
2025-03-27 01:10:10.491838 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-03-27 01:10:10.491851 | orchestrator | Thursday 27 March 2025 01:08:17 +0000 (0:00:04.394) 0:00:21.605 ********
2025-03-27 01:10:10.491865 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-03-27 01:10:10.491878 | orchestrator |
2025-03-27 01:10:10.491892 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-03-27 01:10:10.491914 | orchestrator | Thursday 27 March 2025 01:08:21 +0000 (0:00:04.157) 0:00:25.762 ********
2025-03-27 01:10:10.491928 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-03-27 01:10:10.491942 | orchestrator |
2025-03-27 01:10:10.491955 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-03-27 01:10:10.491969 | orchestrator | Thursday 27 March 2025 01:08:27 +0000 (0:00:05.681) 0:00:31.443 ********
2025-03-27 01:10:10.491983 | orchestrator | changed: [testbed-node-0]
2025-03-27 01:10:10.491996 | orchestrator |
2025-03-27 01:10:10.492010 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-03-27 01:10:10.492039 | orchestrator | Thursday 27 March 2025 01:08:31 +0000 (0:00:03.963) 0:00:35.407 ********
2025-03-27 01:10:10.492054 | orchestrator | changed: [testbed-node-0]
2025-03-27 01:10:10.492068 | orchestrator |
2025-03-27 01:10:10.492082 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-03-27 01:10:10.492096 | orchestrator | Thursday 27 March 2025 01:08:35 +0000 (0:00:04.720) 0:00:40.127 ********
2025-03-27 01:10:10.492109 | orchestrator | changed: [testbed-node-0]
2025-03-27 01:10:10.492123 | orchestrator |
2025-03-27 01:10:10.492137 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-03-27 01:10:10.492151 | orchestrator | Thursday 27 March 2025 01:08:40 +0000
(0:00:04.592) 0:00:44.720 ******** 2025-03-27 01:10:10.492167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.492186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.492201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.492223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.492247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.492263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.492277 | orchestrator | 2025-03-27 01:10:10.492291 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-03-27 01:10:10.492305 | orchestrator | Thursday 27 March 2025 01:08:43 +0000 (0:00:03.049) 0:00:47.770 ******** 2025-03-27 01:10:10.492319 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:10:10.492335 | orchestrator | 2025-03-27 01:10:10.492350 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-03-27 01:10:10.492366 | orchestrator | Thursday 27 March 2025 01:08:43 +0000 (0:00:00.343) 0:00:48.113 ******** 2025-03-27 01:10:10.492381 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:10:10.492396 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:10:10.492412 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:10:10.492427 | orchestrator | 2025-03-27 01:10:10.492443 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-03-27 01:10:10.492458 | orchestrator | Thursday 27 March 2025 01:08:44 +0000 (0:00:01.006) 0:00:49.120 ******** 2025-03-27 01:10:10.492474 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 01:10:10.492489 | orchestrator | 2025-03-27 01:10:10.492505 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-03-27 01:10:10.492572 | orchestrator | Thursday 27 March 2025 01:08:45 +0000 (0:00:00.754) 0:00:49.874 ******** 2025-03-27 01:10:10.492619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.492644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.492662 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:10:10.492688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.492704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.492719 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:10:10.492743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.492765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.492779 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:10:10.492793 | orchestrator | 2025-03-27 01:10:10.492807 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-03-27 01:10:10.492821 | orchestrator | Thursday 27 March 2025 01:08:47 +0000 (0:00:01.726) 0:00:51.601 ******** 2025-03-27 01:10:10.492835 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:10:10.492849 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:10:10.492863 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:10:10.492876 | orchestrator | 2025-03-27 01:10:10.492890 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-03-27 01:10:10.492904 | orchestrator | Thursday 27 March 2025 01:08:47 +0000 (0:00:00.480) 0:00:52.082 ******** 2025-03-27 01:10:10.492918 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:10:10.492932 | orchestrator | 2025-03-27 01:10:10.492945 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-03-27 01:10:10.492959 | orchestrator | Thursday 27 March 2025 01:08:50 +0000 (0:00:02.348) 0:00:54.430 ******** 2025-03-27 01:10:10.492980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.492996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.493020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.493048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.493063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 
01:10:10.493085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.493100 | orchestrator | 2025-03-27 01:10:10.493114 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-03-27 01:10:10.493129 | orchestrator | Thursday 27 March 2025 01:08:53 +0000 (0:00:03.858) 0:00:58.288 ******** 2025-03-27 01:10:10.493143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.493173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.493189 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:10:10.493203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.493225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.493240 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:10:10.493254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.493278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.493299 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:10:10.493313 | orchestrator | 2025-03-27 01:10:10.493327 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-03-27 01:10:10.493341 | orchestrator | Thursday 27 March 2025 01:08:55 +0000 (0:00:01.823) 0:01:00.112 ******** 2025-03-27 01:10:10.493355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.493370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.493385 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:10:10.493407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.493422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.493451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.493467 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:10:10.493481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.493496 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:10:10.493526 | orchestrator | 2025-03-27 01:10:10.493541 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-03-27 01:10:10.493555 | orchestrator | Thursday 27 March 2025 01:08:57 +0000 (0:00:01.591) 0:01:01.704 ******** 2025-03-27 01:10:10.493570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.493596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.493623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.493639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.493653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.493668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.493682 | orchestrator | 2025-03-27 01:10:10.493707 | orchestrator | TASK [magnum : 
Copying over magnum.conf] *************************************** 2025-03-27 01:10:10.493722 | orchestrator | Thursday 27 March 2025 01:09:00 +0000 (0:00:03.164) 0:01:04.868 ******** 2025-03-27 01:10:10.493746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.493767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.493782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.493797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.493817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.493841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.493863 | orchestrator | 2025-03-27 01:10:10.493877 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-03-27 01:10:10.493891 | orchestrator | Thursday 27 March 2025 01:09:15 +0000 (0:00:15.012) 0:01:19.881 ******** 2025-03-27 01:10:10.493905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.493919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.493934 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:10:10.493948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.493970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.493991 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:10:10.494047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-27 01:10:10.494067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:10:10.494082 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:10:10.494096 | orchestrator | 2025-03-27 01:10:10.494110 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-03-27 01:10:10.494124 | orchestrator | Thursday 27 March 2025 01:09:17 +0000 (0:00:02.268) 0:01:22.150 ******** 2025-03-27 01:10:10.494138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.494160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-27 01:10:10.494192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2025-03-27 01:10:10.494208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.494223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.494237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:10:10.494251 | orchestrator | 2025-03-27 01:10:10.494265 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-03-27 01:10:10.494284 | orchestrator | Thursday 27 March 2025 01:09:21 +0000 (0:00:03.939) 0:01:26.089 ******** 2025-03-27 01:10:10.494298 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:10:10.494312 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:10:10.494326 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:10:10.494340 | orchestrator | 2025-03-27 01:10:10.494354 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-03-27 01:10:10.494368 | orchestrator | Thursday 27 March 2025 01:09:22 +0000 (0:00:00.653) 0:01:26.743 ******** 2025-03-27 01:10:10.494388 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:10:10.494402 | orchestrator | 2025-03-27 01:10:10.494416 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-03-27 01:10:10.494429 | orchestrator | Thursday 27 March 2025 01:09:25 +0000 (0:00:03.081) 0:01:29.825 ******** 2025-03-27 01:10:10.494443 | orchestrator | changed: 
[testbed-node-0] 2025-03-27 01:10:10.494457 | orchestrator | 2025-03-27 01:10:10.494471 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-03-27 01:10:10.494484 | orchestrator | Thursday 27 March 2025 01:09:28 +0000 (0:00:02.670) 0:01:32.496 ******** 2025-03-27 01:10:10.494498 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:10:10.494567 | orchestrator | 2025-03-27 01:10:10.494591 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-03-27 01:10:13.524687 | orchestrator | Thursday 27 March 2025 01:09:43 +0000 (0:00:15.295) 0:01:47.791 ******** 2025-03-27 01:10:13.524804 | orchestrator | 2025-03-27 01:10:13.524824 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-03-27 01:10:13.524840 | orchestrator | Thursday 27 March 2025 01:09:43 +0000 (0:00:00.061) 0:01:47.853 ******** 2025-03-27 01:10:13.524855 | orchestrator | 2025-03-27 01:10:13.524870 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-03-27 01:10:13.524884 | orchestrator | Thursday 27 March 2025 01:09:43 +0000 (0:00:00.192) 0:01:48.046 ******** 2025-03-27 01:10:13.524899 | orchestrator | 2025-03-27 01:10:13.524913 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-03-27 01:10:13.524928 | orchestrator | Thursday 27 March 2025 01:09:43 +0000 (0:00:00.060) 0:01:48.106 ******** 2025-03-27 01:10:13.524942 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:10:13.524957 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:10:13.524972 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:10:13.524986 | orchestrator | 2025-03-27 01:10:13.525000 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-03-27 01:10:13.525014 | orchestrator | Thursday 27 March 2025 01:09:59 +0000 (0:00:15.871) 0:02:03.978 ******** 2025-03-27 01:10:13.525134 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:10:13.525153 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:10:13.525167 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:10:13.525181 | orchestrator | 2025-03-27 01:10:13.525195 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:10:13.525210 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-03-27 01:10:13.525226 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-27 01:10:13.525240 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-27 01:10:13.525254 | orchestrator | 2025-03-27 01:10:13.525268 | orchestrator | 2025-03-27 01:10:13.525282 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:10:13.525296 | orchestrator | Thursday 27 March 2025 01:10:08 +0000 (0:00:09.289) 0:02:13.267 ******** 2025-03-27 01:10:13.525310 | orchestrator | =============================================================================== 2025-03-27 01:10:13.525324 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.87s 2025-03-27 01:10:13.525338 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.30s 2025-03-27 01:10:13.525351 | orchestrator | magnum : Copying 
over magnum.conf -------------------------------------- 15.01s 2025-03-27 01:10:13.525365 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.29s 2025-03-27 01:10:13.525379 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.48s 2025-03-27 01:10:13.525392 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 5.68s 2025-03-27 01:10:13.525433 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.72s 2025-03-27 01:10:13.525448 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.59s 2025-03-27 01:10:13.525462 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.40s 2025-03-27 01:10:13.525475 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.16s 2025-03-27 01:10:13.525489 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.97s 2025-03-27 01:10:13.525541 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.96s 2025-03-27 01:10:13.525557 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.94s 2025-03-27 01:10:13.525571 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.86s 2025-03-27 01:10:13.525585 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.70s 2025-03-27 01:10:13.525598 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.16s 2025-03-27 01:10:13.525612 | orchestrator | magnum : Creating Magnum database --------------------------------------- 3.08s 2025-03-27 01:10:13.525625 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 3.05s 2025-03-27 01:10:13.525639 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.67s 2025-03-27 01:10:13.525653 | orchestrator | magnum : include_tasks -------------------------------------------------- 2.35s 2025-03-27 01:10:13.525667 | orchestrator | 2025-03-27 01:10:10 | INFO  | Task 26860dcf-2422-4523-b52e-2971b2471176 is in state SUCCESS 2025-03-27 01:10:13.525687 | orchestrator | 2025-03-27 01:10:10 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:13.525702 | orchestrator | 2025-03-27 01:10:10 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:13.525733 | orchestrator | 2025-03-27 01:10:13 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:13.526792 | orchestrator | 2025-03-27 01:10:13 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:13.526828 | orchestrator | 2025-03-27 01:10:13 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:13.527752 | orchestrator | 2025-03-27 01:10:13 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:13.529601 | orchestrator | 2025-03-27 01:10:13 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:16.571023 | orchestrator | 2025-03-27 01:10:13 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:16.571169 | orchestrator | 2025-03-27 01:10:16 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:16.573344 | orchestrator | 2025-03-27 01:10:16 | INFO  
| Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:16.573380 | orchestrator | 2025-03-27 01:10:16 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:16.575754 | orchestrator | 2025-03-27 01:10:16 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:16.578567 | orchestrator | 2025-03-27 01:10:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:16.578808 | orchestrator | 2025-03-27 01:10:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:19.619744 | orchestrator | 2025-03-27 01:10:19 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:19.619983 | orchestrator | 2025-03-27 01:10:19 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:19.620861 | orchestrator | 2025-03-27 01:10:19 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:19.623392 | orchestrator | 2025-03-27 01:10:19 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:22.666108 | orchestrator | 2025-03-27 01:10:19 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:22.666231 | orchestrator | 2025-03-27 01:10:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:22.666267 | orchestrator | 2025-03-27 01:10:22 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:22.666567 | orchestrator | 2025-03-27 01:10:22 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:22.667265 | orchestrator | 2025-03-27 01:10:22 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:22.668046 | orchestrator | 2025-03-27 01:10:22 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:22.668813 | orchestrator | 2025-03-27 01:10:22 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:25.707676 | orchestrator | 2025-03-27 01:10:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:25.707807 | orchestrator | 2025-03-27 01:10:25 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:25.708171 | orchestrator | 2025-03-27 01:10:25 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:25.709214 | orchestrator | 2025-03-27 01:10:25 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:25.709928 | orchestrator | 2025-03-27 01:10:25 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:25.710846 | orchestrator | 2025-03-27 01:10:25 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:28.754245 | orchestrator | 2025-03-27 01:10:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:28.754400 | orchestrator | 2025-03-27 01:10:28 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:28.755825 | orchestrator | 2025-03-27 01:10:28 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:28.756944 | orchestrator | 2025-03-27 01:10:28 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:28.756975 | orchestrator | 2025-03-27 01:10:28 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:28.760221 | orchestrator | 
2025-03-27 01:10:28 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:31.803864 | orchestrator | 2025-03-27 01:10:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:31.803997 | orchestrator | 2025-03-27 01:10:31 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:31.807739 | orchestrator | 2025-03-27 01:10:31 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:31.808247 | orchestrator | 2025-03-27 01:10:31 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:31.808362 | orchestrator | 2025-03-27 01:10:31 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:31.808397 | orchestrator | 2025-03-27 01:10:31 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:34.848637 | orchestrator | 2025-03-27 01:10:31 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:34.848757 | orchestrator | 2025-03-27 01:10:34 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:34.850432 | orchestrator | 2025-03-27 01:10:34 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:34.852713 | orchestrator | 2025-03-27 01:10:34 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:34.854384 | orchestrator | 2025-03-27 01:10:34 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:34.856336 | orchestrator | 2025-03-27 01:10:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:34.856818 | orchestrator | 2025-03-27 01:10:34 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:37.902512 | orchestrator | 2025-03-27 01:10:37 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:37.910324 | orchestrator | 2025-03-27 01:10:37 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:37.910367 | orchestrator | 2025-03-27 01:10:37 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:40.954717 | orchestrator | 2025-03-27 01:10:37 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:40.954922 | orchestrator | 2025-03-27 01:10:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:40.954946 | orchestrator | 2025-03-27 01:10:37 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:40.954979 | orchestrator | 2025-03-27 01:10:40 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:40.956158 | orchestrator | 2025-03-27 01:10:40 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:40.956191 | orchestrator | 2025-03-27 01:10:40 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:40.956984 | orchestrator | 2025-03-27 01:10:40 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:40.957925 | orchestrator | 2025-03-27 01:10:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:43.997579 | orchestrator | 2025-03-27 01:10:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:43.997672 | orchestrator | 2025-03-27 01:10:43 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:43.997883 | orchestrator | 
2025-03-27 01:10:43 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:43.998931 | orchestrator | 2025-03-27 01:10:43 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:44.000322 | orchestrator | 2025-03-27 01:10:43 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:44.006538 | orchestrator | 2025-03-27 01:10:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:47.058332 | orchestrator | 2025-03-27 01:10:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:47.058471 | orchestrator | 2025-03-27 01:10:47 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:47.059640 | orchestrator | 2025-03-27 01:10:47 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:47.060760 | orchestrator | 2025-03-27 01:10:47 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:47.062565 | orchestrator | 2025-03-27 01:10:47 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:47.063853 | orchestrator | 2025-03-27 01:10:47 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:50.110399 | orchestrator | 2025-03-27 01:10:47 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:50.110690 | orchestrator | 2025-03-27 01:10:50 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:50.112108 | orchestrator | 2025-03-27 01:10:50 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:50.112144 | orchestrator | 2025-03-27 01:10:50 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:50.114543 | orchestrator | 2025-03-27 01:10:50 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:50.116123 | orchestrator | 2025-03-27 01:10:50 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:53.155452 | orchestrator | 2025-03-27 01:10:50 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:53.155635 | orchestrator | 2025-03-27 01:10:53 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:53.157167 | orchestrator | 2025-03-27 01:10:53 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:53.157832 | orchestrator | 2025-03-27 01:10:53 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:53.158650 | orchestrator | 2025-03-27 01:10:53 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:53.159413 | orchestrator | 2025-03-27 01:10:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:56.198359 | orchestrator | 2025-03-27 01:10:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:56.198496 | orchestrator | 2025-03-27 01:10:56 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:56.198682 | orchestrator | 2025-03-27 01:10:56 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:56.200749 | orchestrator | 2025-03-27 01:10:56 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:56.201149 | orchestrator | 2025-03-27 01:10:56 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 
01:10:56.202256 | orchestrator | 2025-03-27 01:10:56 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:59.244125 | orchestrator | 2025-03-27 01:10:56 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:10:59.244376 | orchestrator | 2025-03-27 01:10:59 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:10:59.244977 | orchestrator | 2025-03-27 01:10:59 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:10:59.245013 | orchestrator | 2025-03-27 01:10:59 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:10:59.245549 | orchestrator | 2025-03-27 01:10:59 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:10:59.246100 | orchestrator | 2025-03-27 01:10:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:10:59.246204 | orchestrator | 2025-03-27 01:10:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:02.285668 | orchestrator | 2025-03-27 01:11:02 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:11:02.288868 | orchestrator | 2025-03-27 01:11:02 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:02.288912 | orchestrator | 2025-03-27 01:11:02 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:02.291295 | orchestrator | 2025-03-27 01:11:02 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:02.292592 | orchestrator | 2025-03-27 01:11:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:05.326590 | orchestrator | 2025-03-27 01:11:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:05.326902 | orchestrator | 2025-03-27 01:11:05 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:11:05.326933 | orchestrator | 2025-03-27 01:11:05 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:05.326953 | orchestrator | 2025-03-27 01:11:05 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:05.327761 | orchestrator | 2025-03-27 01:11:05 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:05.328336 | orchestrator | 2025-03-27 01:11:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:05.328552 | orchestrator | 2025-03-27 01:11:05 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:08.368435 | orchestrator | 2025-03-27 01:11:08 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:11:08.376219 | orchestrator | 2025-03-27 01:11:08 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:08.378000 | orchestrator | 2025-03-27 01:11:08 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:08.378433 | orchestrator | 2025-03-27 01:11:08 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:08.380419 | orchestrator | 2025-03-27 01:11:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:11.463681 | orchestrator | 2025-03-27 01:11:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:11.463845 | orchestrator | 2025-03-27 01:11:11 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 
01:11:14.495746 | orchestrator | 2025-03-27 01:11:11 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:14.495859 | orchestrator | 2025-03-27 01:11:11 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:14.495878 | orchestrator | 2025-03-27 01:11:11 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:14.495893 | orchestrator | 2025-03-27 01:11:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:14.495908 | orchestrator | 2025-03-27 01:11:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:14.495940 | orchestrator | 2025-03-27 01:11:14 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:11:14.500500 | orchestrator | 2025-03-27 01:11:14 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:14.507394 | orchestrator | 2025-03-27 01:11:14 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:14.507430 | orchestrator | 2025-03-27 01:11:14 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:14.508423 | orchestrator | 2025-03-27 01:11:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:14.508725 | orchestrator | 2025-03-27 01:11:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:17.563024 | orchestrator | 2025-03-27 01:11:17 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:11:17.565774 | orchestrator | 2025-03-27 01:11:17 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:17.568265 | orchestrator | 2025-03-27 01:11:17 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:17.572393 | orchestrator | 2025-03-27 01:11:17 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:17.573190 | orchestrator | 2025-03-27 01:11:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:17.574128 | orchestrator | 2025-03-27 01:11:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:20.614966 | orchestrator | 2025-03-27 01:11:20 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:11:20.616939 | orchestrator | 2025-03-27 01:11:20 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:20.624273 | orchestrator | 2025-03-27 01:11:20 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:20.625799 | orchestrator | 2025-03-27 01:11:20 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:20.630808 | orchestrator | 2025-03-27 01:11:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:23.671415 | orchestrator | 2025-03-27 01:11:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:23.671588 | orchestrator | 2025-03-27 01:11:23 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:11:23.671844 | orchestrator | 2025-03-27 01:11:23 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:23.673621 | orchestrator | 2025-03-27 01:11:23 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:23.673942 | orchestrator | 2025-03-27 01:11:23 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in 
state STARTED 2025-03-27 01:11:23.676646 | orchestrator | 2025-03-27 01:11:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:26.718283 | orchestrator | 2025-03-27 01:11:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:26.718423 | orchestrator | 2025-03-27 01:11:26 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:11:26.719931 | orchestrator | 2025-03-27 01:11:26 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:26.720394 | orchestrator | 2025-03-27 01:11:26 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:26.721968 | orchestrator | 2025-03-27 01:11:26 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:26.722843 | orchestrator | 2025-03-27 01:11:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:29.769198 | orchestrator | 2025-03-27 01:11:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:29.769340 | orchestrator | 2025-03-27 01:11:29 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:11:29.769816 | orchestrator | 2025-03-27 01:11:29 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:29.770695 | orchestrator | 2025-03-27 01:11:29 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:29.772484 | orchestrator | 2025-03-27 01:11:29 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:29.773710 | orchestrator | 2025-03-27 01:11:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:32.813437 | orchestrator | 2025-03-27 01:11:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:32.813736 | orchestrator | 2025-03-27 01:11:32 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:11:32.814401 | orchestrator | 2025-03-27 01:11:32 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:32.814442 | orchestrator | 2025-03-27 01:11:32 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:32.815145 | orchestrator | 2025-03-27 01:11:32 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:32.815739 | orchestrator | 2025-03-27 01:11:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:35.843744 | orchestrator | 2025-03-27 01:11:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:35.843882 | orchestrator | 2025-03-27 01:11:35 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state STARTED 2025-03-27 01:11:35.844443 | orchestrator | 2025-03-27 01:11:35 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:35.844477 | orchestrator | 2025-03-27 01:11:35 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:35.845051 | orchestrator | 2025-03-27 01:11:35 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:35.845815 | orchestrator | 2025-03-27 01:11:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:38.867289 | orchestrator | 2025-03-27 01:11:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:38.867566 | orchestrator | 2025-03-27 01:11:38 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in 
state STARTED 2025-03-27 01:11:38.867981 | orchestrator | 2025-03-27 01:11:38 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:38.868015 | orchestrator | 2025-03-27 01:11:38 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:38.868676 | orchestrator | 2025-03-27 01:11:38 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:38.869272 | orchestrator | 2025-03-27 01:11:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:41.919159 | orchestrator | 2025-03-27 01:11:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:41.919289 | orchestrator | 2025-03-27 01:11:41 | INFO  | Task f3b39d8b-ce46-4c14-a444-c2fd3631526c is in state SUCCESS 2025-03-27 01:11:41.921373 | orchestrator | 2025-03-27 01:11:41.921412 | orchestrator | 2025-03-27 01:11:41.921427 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:11:41.921442 | orchestrator | 2025-03-27 01:11:41.921456 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:11:41.921470 | orchestrator | Thursday 27 March 2025 01:05:31 +0000 (0:00:00.401) 0:00:00.401 ******** 2025-03-27 01:11:41.921484 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:11:41.921500 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:11:41.921514 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:11:41.921568 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:11:41.921582 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:11:41.921595 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:11:41.921609 | orchestrator | 2025-03-27 01:11:41.921623 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:11:41.921637 | orchestrator | Thursday 27 March 2025 01:05:32 +0000 (0:00:01.061) 0:00:01.463 ******** 2025-03-27 01:11:41.921651 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-03-27 01:11:41.921664 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-03-27 01:11:41.921679 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-03-27 01:11:41.921716 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-03-27 01:11:41.921730 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-03-27 01:11:41.921744 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-03-27 01:11:41.921757 | orchestrator | 2025-03-27 01:11:41.921771 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-03-27 01:11:41.921785 | orchestrator | 2025-03-27 01:11:41.921798 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-03-27 01:11:41.921812 | orchestrator | Thursday 27 March 2025 01:05:33 +0000 (0:00:00.827) 0:00:02.291 ******** 2025-03-27 01:11:41.921827 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:11:41.921842 | orchestrator | 2025-03-27 01:11:41.921856 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-03-27 01:11:41.921870 | orchestrator | Thursday 27 March 2025 01:05:35 +0000 (0:00:01.517) 0:00:03.808 ******** 2025-03-27 01:11:41.921884 | orchestrator | ok: [testbed-node-0] 
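The recurring "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines above are the deploy wrapper polling the OSISM manager for the outcome of the queued Kolla plays: each task id is re-checked until a terminal state such as SUCCESS is reported, as happened for the magnum task before the neutron play started. As a rough sketch of that polling pattern only (not the actual osism code; wait_for_tasks and get_task_state are made-up names, and the terminal-state set is an assumption based on the states visible in this log):

    import time

    # Assumed terminal states; this log only ever shows STARTED and SUCCESS.
    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll every task id until each one reports a terminal state.

        get_task_state is a caller-supplied callable (hypothetical here) that
        returns the current state string for a task id, e.g. "STARTED".
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

In the log above five task ids are being tracked, which matches the five status lines printed before each wait message.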
2025-03-27 01:11:41.921898 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:11:41.922224 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:11:41.922246 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:11:41.922261 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:11:41.922277 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:11:41.922305 | orchestrator | 2025-03-27 01:11:41.922322 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-03-27 01:11:41.922336 | orchestrator | Thursday 27 March 2025 01:05:36 +0000 (0:00:01.881) 0:00:05.689 ******** 2025-03-27 01:11:41.922350 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:11:41.922364 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:11:41.922378 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:11:41.922406 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:11:41.922419 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:11:41.922433 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:11:41.922447 | orchestrator | 2025-03-27 01:11:41.922464 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-03-27 01:11:41.922478 | orchestrator | Thursday 27 March 2025 01:05:38 +0000 (0:00:01.380) 0:00:07.069 ******** 2025-03-27 01:11:41.922492 | orchestrator | ok: [testbed-node-0] => { 2025-03-27 01:11:41.922506 | orchestrator |  "changed": false, 2025-03-27 01:11:41.922567 | orchestrator |  "msg": "All assertions passed" 2025-03-27 01:11:41.922583 | orchestrator | } 2025-03-27 01:11:41.922598 | orchestrator | ok: [testbed-node-1] => { 2025-03-27 01:11:41.922611 | orchestrator |  "changed": false, 2025-03-27 01:11:41.922625 | orchestrator |  "msg": "All assertions passed" 2025-03-27 01:11:41.922639 | orchestrator | } 2025-03-27 01:11:41.922652 | orchestrator | ok: [testbed-node-2] => { 2025-03-27 01:11:41.922666 | orchestrator |  "changed": false, 2025-03-27 01:11:41.922680 | orchestrator |  "msg": "All assertions passed" 2025-03-27 01:11:41.922693 | orchestrator | } 2025-03-27 01:11:41.922707 | orchestrator | ok: [testbed-node-3] => { 2025-03-27 01:11:41.922721 | orchestrator |  "changed": false, 2025-03-27 01:11:41.922735 | orchestrator |  "msg": "All assertions passed" 2025-03-27 01:11:41.922748 | orchestrator | } 2025-03-27 01:11:41.922762 | orchestrator | ok: [testbed-node-4] => { 2025-03-27 01:11:41.922776 | orchestrator |  "changed": false, 2025-03-27 01:11:41.922790 | orchestrator |  "msg": "All assertions passed" 2025-03-27 01:11:41.922803 | orchestrator | } 2025-03-27 01:11:41.922817 | orchestrator | ok: [testbed-node-5] => { 2025-03-27 01:11:41.922830 | orchestrator |  "changed": false, 2025-03-27 01:11:41.922844 | orchestrator |  "msg": "All assertions passed" 2025-03-27 01:11:41.922857 | orchestrator | } 2025-03-27 01:11:41.922871 | orchestrator | 2025-03-27 01:11:41.922885 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-03-27 01:11:41.922899 | orchestrator | Thursday 27 March 2025 01:05:39 +0000 (0:00:00.960) 0:00:08.030 ******** 2025-03-27 01:11:41.922925 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.922939 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.922952 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.922966 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.922979 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.922992 | orchestrator | skipping: [testbed-node-5] 2025-03-27 
01:11:41.923006 | orchestrator | 2025-03-27 01:11:41.923020 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-03-27 01:11:41.923034 | orchestrator | Thursday 27 March 2025 01:05:40 +0000 (0:00:00.766) 0:00:08.796 ******** 2025-03-27 01:11:41.923048 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-03-27 01:11:41.923061 | orchestrator | 2025-03-27 01:11:41.923075 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-03-27 01:11:41.923089 | orchestrator | Thursday 27 March 2025 01:05:44 +0000 (0:00:04.206) 0:00:13.003 ******** 2025-03-27 01:11:41.923103 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-03-27 01:11:41.923736 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-03-27 01:11:41.923768 | orchestrator | 2025-03-27 01:11:41.923816 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-03-27 01:11:41.923833 | orchestrator | Thursday 27 March 2025 01:05:51 +0000 (0:00:07.590) 0:00:20.593 ******** 2025-03-27 01:11:41.923847 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-27 01:11:41.923861 | orchestrator | 2025-03-27 01:11:41.923882 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-03-27 01:11:41.923897 | orchestrator | Thursday 27 March 2025 01:05:55 +0000 (0:00:03.793) 0:00:24.386 ******** 2025-03-27 01:11:41.923911 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-27 01:11:41.923925 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-03-27 01:11:41.923940 | orchestrator | 2025-03-27 01:11:41.923954 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-03-27 01:11:41.923968 | orchestrator | Thursday 27 March 2025 01:05:59 +0000 (0:00:04.092) 0:00:28.479 ******** 2025-03-27 01:11:41.923982 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-27 01:11:41.923996 | orchestrator | 2025-03-27 01:11:41.924010 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-03-27 01:11:41.924024 | orchestrator | Thursday 27 March 2025 01:06:03 +0000 (0:00:03.665) 0:00:32.144 ******** 2025-03-27 01:11:41.924038 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-03-27 01:11:41.924052 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-03-27 01:11:41.924066 | orchestrator | 2025-03-27 01:11:41.924080 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-03-27 01:11:41.924094 | orchestrator | Thursday 27 March 2025 01:06:12 +0000 (0:00:09.298) 0:00:41.443 ******** 2025-03-27 01:11:41.924108 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.924127 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.924141 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.924155 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.924169 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.924183 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.924197 | orchestrator | 2025-03-27 01:11:41.924212 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-03-27 
01:11:41.924226 | orchestrator | Thursday 27 March 2025 01:06:13 +0000 (0:00:00.777) 0:00:42.221 ******** 2025-03-27 01:11:41.924240 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.924254 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.924268 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.924282 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.924296 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.924310 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.924336 | orchestrator | 2025-03-27 01:11:41.924350 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-03-27 01:11:41.924365 | orchestrator | Thursday 27 March 2025 01:06:18 +0000 (0:00:05.418) 0:00:47.639 ******** 2025-03-27 01:11:41.925116 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:11:41.925131 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:11:41.925144 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:11:41.925157 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:11:41.925170 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:11:41.925182 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:11:41.925195 | orchestrator | 2025-03-27 01:11:41.925207 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-03-27 01:11:41.925220 | orchestrator | Thursday 27 March 2025 01:06:20 +0000 (0:00:01.297) 0:00:48.937 ******** 2025-03-27 01:11:41.925233 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.925246 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.925259 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.925271 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.925284 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.925296 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.925316 | orchestrator | 2025-03-27 01:11:41.925330 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-03-27 01:11:41.925342 | orchestrator | Thursday 27 March 2025 01:06:24 +0000 (0:00:03.971) 0:00:52.909 ******** 2025-03-27 01:11:41.925358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.925427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.925474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.925491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.925517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.925894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.925911 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.926011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.926075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.926090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.926114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.926129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.926142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.926293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.926317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.926339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.926353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.926367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.926437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.926455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.926482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.926509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.926542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.926567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.927059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.927096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.927110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.927139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.927154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.927239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.927277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.927292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.927306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.927319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.927332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.927345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.927431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.927461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.927476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.927490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.927505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.927573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.927635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.927673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.927687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.927700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.927714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.928305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.928321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.928409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.928426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.928437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.928448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.928459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.928470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.928480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.928559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.928576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.928586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.928597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.928608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.928678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.928694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.928705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.928715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.928726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.928737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.928832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.928848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.929234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.929379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.929438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.929449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.929547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.929577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.929589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.929622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.929640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.929702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.929800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.929828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.929839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.929877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.929923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.930274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.930300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.930312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.930336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.930358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.930368 | orchestrator | 2025-03-27 01:11:41.930379 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-03-27 01:11:41.930389 | orchestrator | Thursday 27 March 2025 01:06:29 +0000 (0:00:05.812) 0:00:58.721 ******** 2025-03-27 01:11:41.930400 | orchestrator | [WARNING]: Skipped 2025-03-27 01:11:41.930411 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-03-27 01:11:41.930421 | orchestrator | due to this access issue: 2025-03-27 01:11:41.930432 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-03-27 01:11:41.930441 | orchestrator | a directory 2025-03-27 01:11:41.930451 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 01:11:41.930461 | orchestrator | 2025-03-27 01:11:41.930783 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-03-27 01:11:41.930805 | orchestrator | Thursday 27 March 2025 01:06:31 +0000 (0:00:01.426) 0:01:00.147 ******** 2025-03-27 01:11:41.930816 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:11:41.930827 | orchestrator | 2025-03-27 01:11:41.930837 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-03-27 01:11:41.930847 | orchestrator | Thursday 27 March 2025 01:06:33 +0000 (0:00:02.296) 0:01:02.444 ******** 2025-03-27 01:11:41.930871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.930892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.930911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.930922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.931208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.931226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.931235 | orchestrator | 2025-03-27 01:11:41.931244 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-03-27 01:11:41.931253 | orchestrator | Thursday 27 March 2025 01:06:42 +0000 (0:00:09.261) 0:01:11.706 ******** 2025-03-27 01:11:41.931262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.931277 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.931286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.931295 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.931358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.931380 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.931389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.931398 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.931407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.931422 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.931431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.931440 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.931448 | orchestrator | 2025-03-27 01:11:41.931457 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-03-27 01:11:41.931465 | orchestrator | Thursday 27 March 2025 01:06:48 +0000 (0:00:06.024) 0:01:17.730 ******** 2025-03-27 01:11:41.931474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.931482 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.931556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.931580 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.931589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.931604 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.931613 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.931622 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.931630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.931639 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.931654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.931663 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.931672 | orchestrator | 2025-03-27 01:11:41.931681 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-03-27 01:11:41.931689 | orchestrator | Thursday 27 March 2025 01:06:54 +0000 (0:00:05.561) 0:01:23.292 ******** 2025-03-27 01:11:41.931698 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.931783 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.931797 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.931807 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.931815 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.931824 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.931833 | orchestrator | 2025-03-27 01:11:41.931841 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-03-27 01:11:41.931850 | orchestrator | Thursday 27 March 2025 01:07:01 +0000 (0:00:07.421) 0:01:30.713 ******** 
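The long runs of "skipping:" items in these loops follow directly from the flags printed inside each item: only services whose entry shows both enabled and host_in_groups as true are acted on (the neutron-server and neutron-ovn-metadata-agent entries above report "changed"), while everything else is passed over. A minimal Python sketch of that selection, using values copied from the item dumps in this log; the truthy, should_act and healthcheck_flags helpers are illustrative assumptions for reading the output, not code from the kolla-ansible role:

services = {
    # Trimmed copies of two item dumps from this log.
    "neutron-server": {
        "enabled": True,
        "host_in_groups": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
            "timeout": "30",
        },
    },
    "neutron-tls-proxy": {
        "enabled": "no",            # string "no", exactly as printed in the log
        "host_in_groups": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl -u openstack:password 192.168.16.10:9697"],
            "timeout": "30",
        },
    },
}


def truthy(value) -> bool:
    # Mirror Ansible's | bool filter for the values seen here (True/False, "no").
    return value is True or str(value).strip().lower() in ("true", "yes", "1", "on")


def should_act(service: dict) -> bool:
    # Only items with both flags true show up as "changed"; the rest print "skipping".
    return truthy(service.get("enabled")) and truthy(service.get("host_in_groups"))


def healthcheck_flags(hc: dict) -> list:
    # Rough docker-run equivalent of the printed healthcheck dict
    # (assumption: the unit-less numbers are seconds).
    cmd = " ".join(hc["test"][1:])  # drop the CMD-SHELL marker
    return [
        f"--health-cmd={cmd!r}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]


if __name__ == "__main__":
    for name, service in services.items():
        if not should_act(service):
            print(f"skipping: {name}")
            continue
        print(f"changed:  {name}")
        for flag in healthcheck_flags(service["healthcheck"]):
            print(f"  {flag}")

In the role itself this selection is presumably a when: condition on the loop (item.value.enabled | bool together with item.value.host_in_groups), so the sketch is only a reading aid for the item dumps above and below; it does not reproduce the actual task logic.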
2025-03-27 01:11:41.931859 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.931874 | orchestrator | 2025-03-27 01:11:41.931886 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-03-27 01:11:41.931895 | orchestrator | Thursday 27 March 2025 01:07:02 +0000 (0:00:00.211) 0:01:30.925 ******** 2025-03-27 01:11:41.931904 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.931912 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.931921 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.931929 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.931938 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.931946 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.931955 | orchestrator | 2025-03-27 01:11:41.931963 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-03-27 01:11:41.931972 | orchestrator | Thursday 27 March 2025 01:07:03 +0000 (0:00:01.203) 0:01:32.129 ******** 2025-03-27 01:11:41.931981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.931990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.931999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.932067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.932086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.932095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.932104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.932114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.934207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.934328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.934389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.934402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.934430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.934457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934470 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.934494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.934508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.934608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.934645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.934658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.934691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.934733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.934746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.934772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.934790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934803 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.934831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.934845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-03-27 01:11:41.934871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.934909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.934945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.934958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.934970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.934990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.935031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.935044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.935070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.935094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935107 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.935134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.935148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.935206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.935252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.935265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.935296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.935336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.935350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.935363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.935417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.935450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.935503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.935553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935599 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.935612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.935633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.935652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 
'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.935666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.935710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.935730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.935744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.935789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.935810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.935824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.935862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.935890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.935931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.935945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.935963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.935976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.935995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936008 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.936020 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.936033 | orchestrator |
2025-03-27 01:11:41.936045 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-03-27 01:11:41.936058 | orchestrator | Thursday 27 March 2025 01:07:07 +0000 (0:00:04.594) 0:01:36.723 ********
2025-03-27 01:11:41.936071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.936092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.936158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.936192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-03-27 01:11:41.936211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.936225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.936310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.936349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.936363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.936394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.936460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.936486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.936504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.936576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.936603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.936616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.936655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.936677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.936703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.936740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 
5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.936827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.936874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.936887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.936900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.936947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.936974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.936987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.937000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.937047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.937060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.937073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937086 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.937099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.937138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.937152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.937172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937200 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.937236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.937260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.937274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.937322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.937355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.937368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.937381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}}})  2025-03-27 01:11:41.937401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.937439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.937474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.937488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.937500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.937617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.937681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.937721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.937734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937747 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.937766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.937779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.938004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.938054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.938086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.938107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-03-27 01:11:41.938119 | orchestrator |
2025-03-27 01:11:41.938132 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-03-27 01:11:41.938145 | orchestrator | Thursday 27 March 2025 01:07:13 +0000 (0:00:05.088) 0:01:41.812 ********
2025-03-27 01:11:41.938172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.938186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.938243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.938276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.938288 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.938320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.938379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.938410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.938423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.938454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938498 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.938517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.938562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.938575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.938607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.938637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.938651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.938684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.938697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.938728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.938786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.938817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.938831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.938862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.938888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.938906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.938939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.938952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-03-27 01:11:41.938965 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.938978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.938997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.939017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.939030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.939057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.939070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.939113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.939170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939190 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.939203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.939216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.939241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.939278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.939292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.939319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.939332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.939368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.939381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.939457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.939470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-03-27 01:11:41.939483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.939542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.939716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.939738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.939751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939776 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.939806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939820 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.939833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.939846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.939859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.939873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.939899 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-03-27 01:11:41.939912 | orchestrator |
2025-03-27 01:11:41.939925 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-03-27 01:11:41.939937 | orchestrator | Thursday 27 March 2025 01:07:23 +0000 (0:00:10.659) 0:01:52.472 ********
2025-03-27 01:11:41.939950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-03-27 01:11:41.939963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-03-27 01:11:41.939976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-03-27 01:11:41.939989 | orchestrator | skipping:
[testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.940027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.940053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.940065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.940097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.940129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.940142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.940169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.940187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940200 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.940218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.940231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.940263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.940339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.940394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.940409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.940424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.940473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.940506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.940577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.940594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}}})  2025-03-27 01:11:41.940608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.940667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.940681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.940694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.940743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.940761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.940788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.940807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.940846 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.940896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.940921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.940932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.940958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.940969 
| orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.940980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.940990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.941018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.941036 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941047 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.941057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.941068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.941121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.941141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.941156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.941182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.941203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.941214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.941345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.941368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.941390 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.941497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.941519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.941546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.941628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 
'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.941650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.941661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.941734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.941755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941766 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.941776 | orchestrator | 2025-03-27 01:11:41.941786 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-03-27 01:11:41.941796 | orchestrator | Thursday 27 March 2025 01:07:28 +0000 (0:00:04.587) 0:01:57.060 ******** 2025-03-27 01:11:41.941806 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.941816 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:11:41.941826 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:11:41.941836 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.941846 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:11:41.941857 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.941867 | orchestrator | 2025-03-27 01:11:41.941877 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-03-27 01:11:41.941887 | orchestrator | Thursday 27 March 2025 01:07:34 +0000 (0:00:05.901) 0:02:02.962 ******** 2025-03-27 01:11:41.941897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.941908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 
'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.941990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.942002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.942060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.942157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.942195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.942287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.942313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.942324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.942334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.942433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.942448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.942468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.942479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.942588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942598 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.942618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.942629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.942640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.942719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.942735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942745 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.942756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.942775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.942867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.942888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.942925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.942935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.942996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.943042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.943134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.943145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.943156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.943192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.943277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.943306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.943317 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.943335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.943364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.943439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.943450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.943486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.943497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.943635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.943737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.943763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.943782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.943810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.943884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.943898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.943918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.943935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.943946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.944005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.944017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.944034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.944048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.944057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.944066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.944117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.944130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.944147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.944161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.944170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.944179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.944194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.944246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.944266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.944280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-03-27 01:11:41.944289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-03-27 01:11:41.944298 | orchestrator |
2025-03-27 01:11:41.944307 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-03-27 01:11:41.944315 | orchestrator | Thursday 27 March 2025 01:07:39 +0000 (0:00:04.970) 0:02:07.932 ********
2025-03-27 01:11:41.944324 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.944333 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.944341 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.944349 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.944358 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.944366 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.944375 | orchestrator |
2025-03-27 01:11:41.944383 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-03-27 01:11:41.944392 | orchestrator | Thursday 27 March 2025 01:07:43 +0000 (0:00:04.093) 0:02:12.026 ********
2025-03-27 01:11:41.944400 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.944411 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.944420 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.944429 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.944437 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.944446 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.944454 | orchestrator |
2025-03-27 01:11:41.944463 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-03-27 01:11:41.944472 | orchestrator | Thursday 27 March 2025 01:07:46 +0000 (0:00:02.924) 0:02:14.951 ********
2025-03-27 01:11:41.944480 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.944488 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.944497 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.944505 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.944513 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.944541 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.944550 | orchestrator |
2025-03-27 01:11:41.944603 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-03-27 01:11:41.944615 | orchestrator | Thursday 27 March 2025 01:07:49 +0000 (0:00:03.297) 0:02:18.248 ********
2025-03-27 01:11:41.944623 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.944632 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.944640 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.944649 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.944657 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.944665 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.944674 | orchestrator |
2025-03-27 01:11:41.944682 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-03-27 01:11:41.944691 | orchestrator | Thursday 27 March 2025 01:07:54 +0000 (0:00:05.218) 0:02:23.466 ********
2025-03-27 01:11:41.944699 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.944708 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.944716 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.944724 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.944733 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.944741 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.944749 | orchestrator |
2025-03-27 01:11:41.944758 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-03-27 01:11:41.944766 | orchestrator | Thursday 27 March 2025 01:07:57 +0000 (0:00:02.604) 0:02:26.071 ********
2025-03-27 01:11:41.944774 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.944783 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.944791 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.944799 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.944808 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.944816 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.944825 | orchestrator |
2025-03-27 01:11:41.944833 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-03-27 01:11:41.944841 | orchestrator | Thursday 27 March 2025 01:07:59 +0000 (0:00:02.464) 0:02:28.536 ********
2025-03-27 01:11:41.944850 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-03-27 01:11:41.944858 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.944867 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-03-27 01:11:41.944875 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.944884 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-03-27 01:11:41.944892 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.944901 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-03-27 01:11:41.944909 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.944918 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-03-27 01:11:41.944926 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.944935 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-03-27 01:11:41.944943 | orchestrator | skipping: [testbed-node-5] 2025-03-27
01:11:41.944951 | orchestrator | 2025-03-27 01:11:41.944960 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-03-27 01:11:41.944968 | orchestrator | Thursday 27 March 2025 01:08:02 +0000 (0:00:03.202) 0:02:31.738 ******** 2025-03-27 01:11:41.944977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.944992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.945084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.945148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.945168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.945178 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.945267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.945298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.945321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.945330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.945381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.945410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.945419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.945433 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945451 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.945502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.945559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.945585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.945595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.945672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.945685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945694 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.945703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.945717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 
5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.945811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.945834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.945843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.945916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.945939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.945947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.945963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.946011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.946043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946051 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.946060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.946073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.946161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.946183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.946191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.946255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 
01:11:41.946280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.946289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.946313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.946361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946372 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.946381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.946395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946458 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.946479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.946501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.946510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.946545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.946612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.946621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.946646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.946654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946662 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.946710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.946727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.946809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.946838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.946847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.946870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.946931 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.946943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.946967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.946976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.946984 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.946992 | orchestrator | 2025-03-27 01:11:41.947000 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-03-27 01:11:41.947008 | orchestrator | Thursday 27 March 2025 01:08:08 +0000 (0:00:05.187) 0:02:36.926 ******** 2025-03-27 01:11:41.947064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.947076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.947124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.947182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.947191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947199 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.947215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.947275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.947287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.947304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.947312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947325 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.947340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.947389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.947438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947446 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.947494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.947505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.947543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.947566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.947615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.947636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.947644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947657 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.947672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.947720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.947761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.947825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.947837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.947845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.947882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.947949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.947957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.947973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.947981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 
'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.948058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.948079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.948087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.948104 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.948152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.948172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.948201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.948209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.948268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.948276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948289 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.948305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.948314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.948395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.948419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.948428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 
01:11:41.948474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.948486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.948516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.948565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.948627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.948639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948653 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.948670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.948679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.948735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948751 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.948759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.948766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.948795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.948815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.948828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.948836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.948843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.948865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-03-27 01:11:41.948877 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.948885 | orchestrator |
2025-03-27 01:11:41.948892 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-03-27 01:11:41.948899 | orchestrator | Thursday 27 March 2025 01:08:12 +0000 (0:00:04.337) 0:02:41.264 ********
2025-03-27 01:11:41.948906 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.948913 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.948920 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.948930 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.948937 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.948944 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.948951 | orchestrator |
2025-03-27 01:11:41.948958 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-03-27 01:11:41.948965 | orchestrator | Thursday 27 March 2025 01:08:17 +0000 (0:00:05.098) 0:02:46.363 ********
2025-03-27 01:11:41.948972 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.948979 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.948986 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.948992 | orchestrator | changed: [testbed-node-4]
2025-03-27 01:11:41.948999 | orchestrator | changed: [testbed-node-3]
2025-03-27 01:11:41.949006 | orchestrator | changed: [testbed-node-5]
2025-03-27 01:11:41.949013 | orchestrator |
2025-03-27 01:11:41.949020 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-03-27 01:11:41.949027 | orchestrator | Thursday 27 March 2025 01:08:25 +0000 (0:00:07.882) 0:02:54.248 ********
2025-03-27 01:11:41.949033 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.949040 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.949047 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.949054 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.949060 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.949067 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.949074 | orchestrator |
2025-03-27 01:11:41.949081 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-03-27 01:11:41.949088 | orchestrator | Thursday 27 March 2025 01:08:30 +0000 (0:00:05.491) 0:02:59.739 ********
2025-03-27 01:11:41.949095 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.949101 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.949108 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.949115 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.949122 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.949129 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.949135 | orchestrator |
2025-03-27 01:11:41.949142 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-03-27 01:11:41.949149 | orchestrator | Thursday 27 March 2025 01:08:36 +0000 (0:00:05.538) 0:03:05.278 ********
2025-03-27 01:11:41.949156 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.949163 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.949170 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.949176 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.949183 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.949190 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.949197 | orchestrator |
2025-03-27 01:11:41.949204 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-03-27 01:11:41.949211 | orchestrator | Thursday 27 March 2025 01:08:40 +0000 (0:00:04.407) 0:03:09.685 ********
2025-03-27 01:11:41.949218 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.949224 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.949231 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.949238 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.949245 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.949251 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.949262 | orchestrator |
2025-03-27 01:11:41.949271 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-03-27 01:11:41.949278 | orchestrator | Thursday 27 March 2025 01:08:46 +0000 (0:00:05.076) 0:03:14.762 ********
2025-03-27 01:11:41.949286 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.949293 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.949301 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.949309 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.949317 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.949324 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.949332 | orchestrator |
2025-03-27 01:11:41.949339 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-03-27 01:11:41.949347 | orchestrator | Thursday 27 March 2025 01:08:50 +0000 (0:00:04.801) 0:03:19.564 ********
2025-03-27 01:11:41.949355 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.949363 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.949371 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.949379 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.949386 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.949394 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.949402 | orchestrator |
2025-03-27 01:11:41.949410 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-03-27 01:11:41.949417 | orchestrator | Thursday 27 March 2025 01:09:00 +0000 (0:00:09.781) 0:03:29.346 ********
2025-03-27 01:11:41.949425 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.949432 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.949440 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.949448 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.949456 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.949478 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.949490 | orchestrator |
2025-03-27 01:11:41.949499 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-03-27 01:11:41.949507 | orchestrator | Thursday 27 March 2025 01:09:08 +0000 (0:00:07.862) 0:03:37.209 ********
2025-03-27 01:11:41.949514 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.949537 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.949545 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.949553 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.949561 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.949569 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.949576 | orchestrator |
2025-03-27 01:11:41.949584 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-03-27 01:11:41.949592 | orchestrator | Thursday 27 March 2025 01:09:15 +0000 (0:00:06.639) 0:03:43.849 ********
2025-03-27 01:11:41.949599 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-27 01:11:41.949607 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:11:41.949615 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-27 01:11:41.949622 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:11:41.949629 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-27 01:11:41.949636 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:11:41.949643 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-27 01:11:41.949650 | orchestrator | skipping: [testbed-node-3]
2025-03-27 01:11:41.949656 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-27 01:11:41.949663 | orchestrator | skipping: [testbed-node-4]
2025-03-27 01:11:41.949670 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-03-27 01:11:41.949677 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:11:41.949690 | orchestrator |
2025-03-27 01:11:41.949697 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-03-27 01:11:41.949704 | orchestrator | Thursday 27 March 2025 01:09:22 +0000 (0:00:06.927) 0:03:50.776 ********
2025-03-27 01:11:41.949711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-03-27 01:11:41.949724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206',
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.949775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.949796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.949804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.949837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.949856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.949863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.949898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.949907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949918 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.949926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.949940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.949989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.949997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.950069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.950088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.950133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.950152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.950172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950222 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.950243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.950284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.950292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.950341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.950354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950361 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.950368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.950396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.950422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950451 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.950458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.950473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.950509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950543 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.950552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.950559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950570 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.950593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.950601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.950654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.950697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.950731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.950760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.950768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950779 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.950801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.950809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.950848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.950905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.950924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.950945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.950967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.950974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.950985 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.950996 | orchestrator | 2025-03-27 01:11:41.951004 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-03-27 01:11:41.951011 | orchestrator | Thursday 27 March 2025 01:09:26 +0000 (0:00:04.018) 0:03:54.794 ******** 2025-03-27 01:11:41.951018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.951042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.951079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.951137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.951191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951198 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.951240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.951273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.951289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.951341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.951374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951389 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.951411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.951419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.951430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.951486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.951501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.951549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.951557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-27 01:11:41.951571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.951613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.951662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.951676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.951706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.951718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951725 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.951732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.951751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.951781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.951788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951800 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.951813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.951832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.951860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.951870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-27 01:11:41.951888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-27 01:11:41.951928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-03-27 01:11:41.951969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.951983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:11:41.951991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:11:41.951998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.952005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-27 01:11:41.952018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-27 01:11:41.952025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-27 01:11:41.952036 | orchestrator | 2025-03-27 01:11:41.952043 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-03-27 01:11:41.952051 | orchestrator | Thursday 27 March 2025 01:09:30 +0000 (0:00:04.562) 0:03:59.356 ******** 2025-03-27 01:11:41.952058 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:11:41.952065 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:11:41.952074 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:11:41.952081 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:11:41.952088 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:11:41.952095 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:11:41.952102 | orchestrator | 2025-03-27 01:11:41.952109 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-03-27 01:11:41.952116 | orchestrator | Thursday 27 March 2025 01:09:31 +0000 (0:00:00.765) 0:04:00.122 ******** 2025-03-27 01:11:41.952123 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:11:41.952130 | orchestrator | 2025-03-27 01:11:41.952136 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-03-27 01:11:41.952143 | 
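
The long "skipping"/"changed" loop output above is kolla-ansible iterating over its neutron service map: an item is only acted on when both its enabled flag and its host_in_groups mapping are true, which is why only neutron-server (on the three control nodes) and neutron-ovn-metadata-agent (on the network/compute nodes) report "changed" while every other agent entry is skipped. Below is a minimal Python sketch of that selection rule as it appears in the output; the trimmed-down service map mirrors the dict shape dumped above, but select_services and _to_bool are illustrative helpers, not kolla-ansible code.

# Sketch of the per-item condition behind the "skipping"/"changed" lines above.
# Only the fields relevant to the decision are kept; the helpers are ours,
# not part of kolla-ansible.

neutron_services = {
    "neutron-server": {"enabled": True, "host_in_groups": True},
    "neutron-ovn-metadata-agent": {"enabled": True, "host_in_groups": True},
    "neutron-openvswitch-agent": {"enabled": False, "host_in_groups": True},
    "neutron-ovn-agent": {"enabled": False, "host_in_groups": False},
    "neutron-tls-proxy": {"enabled": "no", "host_in_groups": True},
}

def _to_bool(value):
    # The service map mixes booleans with strings such as 'no'/'yes'.
    if isinstance(value, str):
        return value.strip().lower() in ("yes", "true", "1")
    return bool(value)

def select_services(services):
    """Yield only the services that would actually be handled on this host."""
    for name, svc in services.items():
        if _to_bool(svc.get("enabled")) and _to_bool(svc.get("host_in_groups")):
            yield name

print(list(select_services(neutron_services)))
# -> ['neutron-server', 'neutron-ovn-metadata-agent']
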
orchestrator | Thursday 27 March 2025 01:09:34 +0000 (0:00:02.835) 0:04:02.958 ******** 2025-03-27 01:11:41.952150 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:11:41.952157 | orchestrator | 2025-03-27 01:11:41.952164 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-03-27 01:11:41.952171 | orchestrator | Thursday 27 March 2025 01:09:36 +0000 (0:00:02.686) 0:04:05.644 ******** 2025-03-27 01:11:41.952178 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:11:41.952184 | orchestrator | 2025-03-27 01:11:41.952191 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-03-27 01:11:41.952198 | orchestrator | Thursday 27 March 2025 01:10:16 +0000 (0:00:39.699) 0:04:45.344 ******** 2025-03-27 01:11:41.952205 | orchestrator | 2025-03-27 01:11:41.952212 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-03-27 01:11:41.952218 | orchestrator | Thursday 27 March 2025 01:10:16 +0000 (0:00:00.143) 0:04:45.488 ******** 2025-03-27 01:11:41.952225 | orchestrator | 2025-03-27 01:11:41.952232 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-03-27 01:11:41.952239 | orchestrator | Thursday 27 March 2025 01:10:17 +0000 (0:00:00.299) 0:04:45.787 ******** 2025-03-27 01:11:41.952246 | orchestrator | 2025-03-27 01:11:41.952252 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-03-27 01:11:41.952259 | orchestrator | Thursday 27 March 2025 01:10:17 +0000 (0:00:00.066) 0:04:45.853 ******** 2025-03-27 01:11:41.952266 | orchestrator | 2025-03-27 01:11:41.952273 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-03-27 01:11:41.952280 | orchestrator | Thursday 27 March 2025 01:10:17 +0000 (0:00:00.066) 0:04:45.919 ******** 2025-03-27 01:11:41.952287 | orchestrator | 2025-03-27 01:11:41.952293 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-03-27 01:11:41.952300 | orchestrator | Thursday 27 March 2025 01:10:17 +0000 (0:00:00.061) 0:04:45.981 ******** 2025-03-27 01:11:41.952307 | orchestrator | 2025-03-27 01:11:41.952314 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-03-27 01:11:41.952320 | orchestrator | Thursday 27 March 2025 01:10:17 +0000 (0:00:00.291) 0:04:46.273 ******** 2025-03-27 01:11:41.952327 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:11:41.952334 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:11:41.952341 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:11:41.952348 | orchestrator | 2025-03-27 01:11:41.952355 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-03-27 01:11:41.952365 | orchestrator | Thursday 27 March 2025 01:10:45 +0000 (0:00:27.689) 0:05:13.962 ******** 2025-03-27 01:11:41.952372 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:11:41.952379 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:11:41.952385 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:11:41.952392 | orchestrator | 2025-03-27 01:11:41.952399 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:11:41.952406 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-27 01:11:41.952414 | 
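
Each service entry in the loop output carries a healthcheck block (interval, retries, start_period, test, timeout) that ends up on the containers restarted by the handlers above, such as neutron_server and neutron_ovn_metadata_agent. Purely as an illustration of what those fields correspond to, and not of how kolla-ansible's own container module applies them, here is a rough mapping onto the Docker SDK for Python, which expects the durations in nanoseconds; the values are assumed to be seconds, as in the log.

# Illustrative only: translate a kolla-style healthcheck dict (seconds as
# strings, as printed in the loop output above) into the shape docker-py
# accepts for containers.run(healthcheck=...). kolla-ansible does this with
# its own module; this sketch just shows what the fields mean.

NS_PER_S = 1_000_000_000

def to_docker_healthcheck(hc):
    return {
        "test": hc["test"],
        "interval": int(hc["interval"]) * NS_PER_S,
        "timeout": int(hc["timeout"]) * NS_PER_S,
        "start_period": int(hc["start_period"]) * NS_PER_S,
        "retries": int(hc["retries"]),
    }

kolla_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.12:9696"],
    "timeout": "30",
}

print(to_docker_healthcheck(kolla_hc))
# e.g. client.containers.run(image, healthcheck=to_docker_healthcheck(kolla_hc), ...)
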
orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-03-27 01:11:41.952421 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-03-27 01:11:41.952431 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-03-27 01:11:41.952438 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-03-27 01:11:41.952445 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-03-27 01:11:41.952452 | orchestrator | 2025-03-27 01:11:41.952459 | orchestrator | 2025-03-27 01:11:41.952466 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:11:41.952473 | orchestrator | Thursday 27 March 2025 01:11:38 +0000 (0:00:53.089) 0:06:07.052 ******** 2025-03-27 01:11:41.952480 | orchestrator | =============================================================================== 2025-03-27 01:11:41.952486 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 53.09s 2025-03-27 01:11:41.952493 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.70s 2025-03-27 01:11:41.952500 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.69s 2025-03-27 01:11:41.952507 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 10.66s 2025-03-27 01:11:41.952514 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 9.78s 2025-03-27 01:11:41.952559 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 9.30s 2025-03-27 01:11:41.952571 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 9.26s 2025-03-27 01:11:44.964750 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 7.89s 2025-03-27 01:11:44.964868 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 7.86s 2025-03-27 01:11:44.964907 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.59s 2025-03-27 01:11:44.964923 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 7.42s 2025-03-27 01:11:44.964937 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 6.93s 2025-03-27 01:11:44.964951 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 6.64s 2025-03-27 01:11:44.965083 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 6.02s 2025-03-27 01:11:44.965101 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.90s 2025-03-27 01:11:44.965115 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 5.81s 2025-03-27 01:11:44.965129 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 5.56s 2025-03-27 01:11:44.965143 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 5.54s 2025-03-27 01:11:44.965157 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 5.49s 2025-03-27 01:11:44.965200 | orchestrator | Load and persist kernel modules 
----------------------------------------- 5.42s 2025-03-27 01:11:44.965215 | orchestrator | 2025-03-27 01:11:41 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:44.965230 | orchestrator | 2025-03-27 01:11:41 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:44.965244 | orchestrator | 2025-03-27 01:11:41 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:44.965258 | orchestrator | 2025-03-27 01:11:41 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:11:44.965272 | orchestrator | 2025-03-27 01:11:41 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:44.965286 | orchestrator | 2025-03-27 01:11:41 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:44.965318 | orchestrator | 2025-03-27 01:11:44 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:44.965733 | orchestrator | 2025-03-27 01:11:44 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:44.965773 | orchestrator | 2025-03-27 01:11:44 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:44.966416 | orchestrator | 2025-03-27 01:11:44 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:11:44.967180 | orchestrator | 2025-03-27 01:11:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:44.967268 | orchestrator | 2025-03-27 01:11:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:48.000944 | orchestrator | 2025-03-27 01:11:47 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:48.002113 | orchestrator | 2025-03-27 01:11:48 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:48.004589 | orchestrator | 2025-03-27 01:11:48 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:48.005384 | orchestrator | 2025-03-27 01:11:48 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:11:48.006670 | orchestrator | 2025-03-27 01:11:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:48.007326 | orchestrator | 2025-03-27 01:11:48 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:51.062246 | orchestrator | 2025-03-27 01:11:51 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:51.065982 | orchestrator | 2025-03-27 01:11:51 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:51.067648 | orchestrator | 2025-03-27 01:11:51 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:51.068707 | orchestrator | 2025-03-27 01:11:51 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:11:51.069461 | orchestrator | 2025-03-27 01:11:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:51.069658 | orchestrator | 2025-03-27 01:11:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:54.128260 | orchestrator | 2025-03-27 01:11:54 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:54.129739 | orchestrator | 2025-03-27 01:11:54 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:54.130346 | orchestrator | 2025-03-27 01:11:54 | 
INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:54.130990 | orchestrator | 2025-03-27 01:11:54 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:11:54.131936 | orchestrator | 2025-03-27 01:11:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:11:57.168560 | orchestrator | 2025-03-27 01:11:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:11:57.168723 | orchestrator | 2025-03-27 01:11:57 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:11:57.168917 | orchestrator | 2025-03-27 01:11:57 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:11:57.169046 | orchestrator | 2025-03-27 01:11:57 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:11:57.169309 | orchestrator | 2025-03-27 01:11:57 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:11:57.169794 | orchestrator | 2025-03-27 01:11:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:00.197502 | orchestrator | 2025-03-27 01:11:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:00.197688 | orchestrator | 2025-03-27 01:12:00 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:00.198002 | orchestrator | 2025-03-27 01:12:00 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:00.198120 | orchestrator | 2025-03-27 01:12:00 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:00.198664 | orchestrator | 2025-03-27 01:12:00 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:00.199191 | orchestrator | 2025-03-27 01:12:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:03.229662 | orchestrator | 2025-03-27 01:12:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:03.229787 | orchestrator | 2025-03-27 01:12:03 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:03.230586 | orchestrator | 2025-03-27 01:12:03 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:03.230620 | orchestrator | 2025-03-27 01:12:03 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:03.230983 | orchestrator | 2025-03-27 01:12:03 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:03.231601 | orchestrator | 2025-03-27 01:12:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:06.269176 | orchestrator | 2025-03-27 01:12:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:06.269302 | orchestrator | 2025-03-27 01:12:06 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:06.269479 | orchestrator | 2025-03-27 01:12:06 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:06.270492 | orchestrator | 2025-03-27 01:12:06 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:06.271549 | orchestrator | 2025-03-27 01:12:06 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:06.272426 | orchestrator | 2025-03-27 01:12:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:09.308476 | orchestrator | 
2025-03-27 01:12:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:09.308639 | orchestrator | 2025-03-27 01:12:09 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:09.309321 | orchestrator | 2025-03-27 01:12:09 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:09.309384 | orchestrator | 2025-03-27 01:12:09 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:09.309990 | orchestrator | 2025-03-27 01:12:09 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:09.311295 | orchestrator | 2025-03-27 01:12:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:09.312334 | orchestrator | 2025-03-27 01:12:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:12.343429 | orchestrator | 2025-03-27 01:12:12 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:12.345990 | orchestrator | 2025-03-27 01:12:12 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:12.346669 | orchestrator | 2025-03-27 01:12:12 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:12.347677 | orchestrator | 2025-03-27 01:12:12 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:12.356105 | orchestrator | 2025-03-27 01:12:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:15.383190 | orchestrator | 2025-03-27 01:12:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:15.383421 | orchestrator | 2025-03-27 01:12:15 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:15.383801 | orchestrator | 2025-03-27 01:12:15 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:15.383855 | orchestrator | 2025-03-27 01:12:15 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:15.384606 | orchestrator | 2025-03-27 01:12:15 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:15.388381 | orchestrator | 2025-03-27 01:12:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:18.416893 | orchestrator | 2025-03-27 01:12:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:18.417043 | orchestrator | 2025-03-27 01:12:18 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:18.417474 | orchestrator | 2025-03-27 01:12:18 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:18.417511 | orchestrator | 2025-03-27 01:12:18 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:18.418375 | orchestrator | 2025-03-27 01:12:18 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:18.418781 | orchestrator | 2025-03-27 01:12:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:21.462960 | orchestrator | 2025-03-27 01:12:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:21.463096 | orchestrator | 2025-03-27 01:12:21 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:21.463428 | orchestrator | 2025-03-27 01:12:21 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:21.463462 | orchestrator | 
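For orientation: the repeated "Task … is in state STARTED" and "Wait 1 second(s) until the next check" records above and below come from a client-side wait loop that polls the state of the queued deployment tasks until they leave the STARTED state. The actual implementation lives in the OSISM tooling; the following is only a minimal, hypothetical Python sketch of such a loop (the task IDs, the `get_task_state` helper, and the terminal states are illustrative assumptions, not the real API).

```python
import time

# Hypothetical helper: in the real tooling the state would come from the
# task queue's result backend; here it is only a stub for illustration.
def get_task_state(task_id: str) -> str:
    raise NotImplementedError("illustrative stub only")

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    """Poll each task until none of them is in state STARTED anymore."""
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding while looping is safe.
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

In the log, the same five task IDs are reported on every round until one of them (d75e4b59-…) transitions to SUCCESS, at which point the buffered Ansible play output for that task appears to be flushed to the console.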
2025-03-27 01:12:21 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:21.463987 | orchestrator | 2025-03-27 01:12:21 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:21.464792 | orchestrator | 2025-03-27 01:12:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:24.501086 | orchestrator | 2025-03-27 01:12:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:24.501313 | orchestrator | 2025-03-27 01:12:24 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:24.501951 | orchestrator | 2025-03-27 01:12:24 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:24.501984 | orchestrator | 2025-03-27 01:12:24 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:24.502459 | orchestrator | 2025-03-27 01:12:24 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:24.503077 | orchestrator | 2025-03-27 01:12:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:27.537157 | orchestrator | 2025-03-27 01:12:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:27.537287 | orchestrator | 2025-03-27 01:12:27 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:27.538601 | orchestrator | 2025-03-27 01:12:27 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:27.539166 | orchestrator | 2025-03-27 01:12:27 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:27.539503 | orchestrator | 2025-03-27 01:12:27 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:27.540723 | orchestrator | 2025-03-27 01:12:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:30.587059 | orchestrator | 2025-03-27 01:12:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:30.587198 | orchestrator | 2025-03-27 01:12:30 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:30.588089 | orchestrator | 2025-03-27 01:12:30 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:30.588122 | orchestrator | 2025-03-27 01:12:30 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:30.589829 | orchestrator | 2025-03-27 01:12:30 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:30.590917 | orchestrator | 2025-03-27 01:12:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:33.634486 | orchestrator | 2025-03-27 01:12:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:33.634670 | orchestrator | 2025-03-27 01:12:33 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:33.639010 | orchestrator | 2025-03-27 01:12:33 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:33.641070 | orchestrator | 2025-03-27 01:12:33 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:33.642824 | orchestrator | 2025-03-27 01:12:33 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:33.644046 | orchestrator | 2025-03-27 01:12:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 
01:12:36.698201 | orchestrator | 2025-03-27 01:12:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:36.698327 | orchestrator | 2025-03-27 01:12:36 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:36.698724 | orchestrator | 2025-03-27 01:12:36 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:36.698751 | orchestrator | 2025-03-27 01:12:36 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:36.698773 | orchestrator | 2025-03-27 01:12:36 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:36.699714 | orchestrator | 2025-03-27 01:12:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:39.750334 | orchestrator | 2025-03-27 01:12:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:39.750458 | orchestrator | 2025-03-27 01:12:39 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:39.751356 | orchestrator | 2025-03-27 01:12:39 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:39.751388 | orchestrator | 2025-03-27 01:12:39 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:39.753052 | orchestrator | 2025-03-27 01:12:39 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:39.760847 | orchestrator | 2025-03-27 01:12:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:42.806592 | orchestrator | 2025-03-27 01:12:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:42.806713 | orchestrator | 2025-03-27 01:12:42 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:42.807037 | orchestrator | 2025-03-27 01:12:42 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:42.809412 | orchestrator | 2025-03-27 01:12:42 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:45.859996 | orchestrator | 2025-03-27 01:12:42 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:45.860112 | orchestrator | 2025-03-27 01:12:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:45.860130 | orchestrator | 2025-03-27 01:12:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:45.860163 | orchestrator | 2025-03-27 01:12:45 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:45.865776 | orchestrator | 2025-03-27 01:12:45 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:48.907303 | orchestrator | 2025-03-27 01:12:45 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:48.907417 | orchestrator | 2025-03-27 01:12:45 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:48.907452 | orchestrator | 2025-03-27 01:12:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:48.907467 | orchestrator | 2025-03-27 01:12:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:48.907496 | orchestrator | 2025-03-27 01:12:48 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:48.909306 | orchestrator | 2025-03-27 01:12:48 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 
01:12:48.911867 | orchestrator | 2025-03-27 01:12:48 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:48.913803 | orchestrator | 2025-03-27 01:12:48 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:48.915504 | orchestrator | 2025-03-27 01:12:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:48.915791 | orchestrator | 2025-03-27 01:12:48 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:51.960249 | orchestrator | 2025-03-27 01:12:51 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:51.960607 | orchestrator | 2025-03-27 01:12:51 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:51.961693 | orchestrator | 2025-03-27 01:12:51 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:51.964064 | orchestrator | 2025-03-27 01:12:51 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:51.964612 | orchestrator | 2025-03-27 01:12:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:51.965447 | orchestrator | 2025-03-27 01:12:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:55.009981 | orchestrator | 2025-03-27 01:12:55 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:55.011020 | orchestrator | 2025-03-27 01:12:55 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:55.012375 | orchestrator | 2025-03-27 01:12:55 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:55.014510 | orchestrator | 2025-03-27 01:12:55 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:55.015450 | orchestrator | 2025-03-27 01:12:55 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:12:55.015764 | orchestrator | 2025-03-27 01:12:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:12:58.069991 | orchestrator | 2025-03-27 01:12:58 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:12:58.070969 | orchestrator | 2025-03-27 01:12:58 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:12:58.072329 | orchestrator | 2025-03-27 01:12:58 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:12:58.073425 | orchestrator | 2025-03-27 01:12:58 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:12:58.074717 | orchestrator | 2025-03-27 01:12:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:01.129501 | orchestrator | 2025-03-27 01:12:58 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:01.129696 | orchestrator | 2025-03-27 01:13:01 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:01.133033 | orchestrator | 2025-03-27 01:13:01 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:13:01.134739 | orchestrator | 2025-03-27 01:13:01 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:01.136236 | orchestrator | 2025-03-27 01:13:01 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:01.137741 | orchestrator | 2025-03-27 01:13:01 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in 
state STARTED 2025-03-27 01:13:04.200820 | orchestrator | 2025-03-27 01:13:01 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:04.200958 | orchestrator | 2025-03-27 01:13:04 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:04.202131 | orchestrator | 2025-03-27 01:13:04 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:13:04.202165 | orchestrator | 2025-03-27 01:13:04 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:04.202702 | orchestrator | 2025-03-27 01:13:04 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:04.203927 | orchestrator | 2025-03-27 01:13:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:04.204068 | orchestrator | 2025-03-27 01:13:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:07.251197 | orchestrator | 2025-03-27 01:13:07 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:07.252379 | orchestrator | 2025-03-27 01:13:07 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:13:07.253392 | orchestrator | 2025-03-27 01:13:07 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:07.255064 | orchestrator | 2025-03-27 01:13:07 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:07.255897 | orchestrator | 2025-03-27 01:13:07 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:10.311912 | orchestrator | 2025-03-27 01:13:07 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:10.312052 | orchestrator | 2025-03-27 01:13:10 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:10.315836 | orchestrator | 2025-03-27 01:13:10 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:13:10.318213 | orchestrator | 2025-03-27 01:13:10 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:10.320128 | orchestrator | 2025-03-27 01:13:10 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:10.322515 | orchestrator | 2025-03-27 01:13:10 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:10.322858 | orchestrator | 2025-03-27 01:13:10 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:13.371305 | orchestrator | 2025-03-27 01:13:13 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:13.371972 | orchestrator | 2025-03-27 01:13:13 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:13:13.377593 | orchestrator | 2025-03-27 01:13:13 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:13.379714 | orchestrator | 2025-03-27 01:13:13 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:13.383043 | orchestrator | 2025-03-27 01:13:13 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:16.436480 | orchestrator | 2025-03-27 01:13:13 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:16.436654 | orchestrator | 2025-03-27 01:13:16 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:16.438135 | orchestrator | 2025-03-27 01:13:16 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in 
state STARTED 2025-03-27 01:13:16.441478 | orchestrator | 2025-03-27 01:13:16 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:16.442117 | orchestrator | 2025-03-27 01:13:16 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:16.444219 | orchestrator | 2025-03-27 01:13:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:16.444959 | orchestrator | 2025-03-27 01:13:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:19.480476 | orchestrator | 2025-03-27 01:13:19 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:19.481393 | orchestrator | 2025-03-27 01:13:19 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:13:19.482110 | orchestrator | 2025-03-27 01:13:19 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:19.482973 | orchestrator | 2025-03-27 01:13:19 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:19.484052 | orchestrator | 2025-03-27 01:13:19 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:22.534755 | orchestrator | 2025-03-27 01:13:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:22.534891 | orchestrator | 2025-03-27 01:13:22 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:22.537462 | orchestrator | 2025-03-27 01:13:22 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:13:22.539416 | orchestrator | 2025-03-27 01:13:22 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:22.541087 | orchestrator | 2025-03-27 01:13:22 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:22.543990 | orchestrator | 2025-03-27 01:13:22 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:22.544419 | orchestrator | 2025-03-27 01:13:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:25.600199 | orchestrator | 2025-03-27 01:13:25 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:25.601344 | orchestrator | 2025-03-27 01:13:25 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:13:25.602507 | orchestrator | 2025-03-27 01:13:25 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:25.603852 | orchestrator | 2025-03-27 01:13:25 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:25.605552 | orchestrator | 2025-03-27 01:13:25 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:28.658483 | orchestrator | 2025-03-27 01:13:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:28.658666 | orchestrator | 2025-03-27 01:13:28 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:28.660593 | orchestrator | 2025-03-27 01:13:28 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state STARTED 2025-03-27 01:13:28.663647 | orchestrator | 2025-03-27 01:13:28 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:28.665810 | orchestrator | 2025-03-27 01:13:28 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:28.667133 | orchestrator | 2025-03-27 01:13:28 | INFO  | Task 
06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:31.719566 | orchestrator | 2025-03-27 01:13:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:31.719712 | orchestrator | 2025-03-27 01:13:31 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:31.723210 | orchestrator | 2025-03-27 01:13:31 | INFO  | Task d75e4b59-0225-488c-aff5-c614ba5029b4 is in state SUCCESS 2025-03-27 01:13:31.724761 | orchestrator | 2025-03-27 01:13:31.724802 | orchestrator | 2025-03-27 01:13:31.724818 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:13:31.724833 | orchestrator | 2025-03-27 01:13:31.724848 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:13:31.724946 | orchestrator | Thursday 27 March 2025 01:08:19 +0000 (0:00:01.191) 0:00:01.191 ******** 2025-03-27 01:13:31.725379 | orchestrator | ok: [testbed-manager] 2025-03-27 01:13:31.725403 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:13:31.725418 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:13:31.725433 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:13:31.725448 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:13:31.725463 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:13:31.725477 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:13:31.725491 | orchestrator | 2025-03-27 01:13:31.725506 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:13:31.725520 | orchestrator | Thursday 27 March 2025 01:08:20 +0000 (0:00:01.746) 0:00:02.937 ******** 2025-03-27 01:13:31.726103 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-03-27 01:13:31.726126 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-03-27 01:13:31.726140 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-03-27 01:13:31.726154 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-03-27 01:13:31.726169 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-03-27 01:13:31.726183 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-03-27 01:13:31.726197 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-03-27 01:13:31.726210 | orchestrator | 2025-03-27 01:13:31.726224 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-03-27 01:13:31.726238 | orchestrator | 2025-03-27 01:13:31.726252 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-03-27 01:13:31.726266 | orchestrator | Thursday 27 March 2025 01:08:22 +0000 (0:00:01.422) 0:00:04.360 ******** 2025-03-27 01:13:31.726569 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:13:31.726587 | orchestrator | 2025-03-27 01:13:31.726601 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-03-27 01:13:31.726615 | orchestrator | Thursday 27 March 2025 01:08:23 +0000 (0:00:01.318) 0:00:05.679 ******** 2025-03-27 01:13:31.726632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.726651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.726666 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-27 01:13:31.727861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.728220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.728246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.728321 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.728337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.728350 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.728433 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.728478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.728492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.728506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.728519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.728553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.728567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.728646 | orchestrator | 
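The "Ensuring config directories exist" task loops over the prometheus role's service dictionary (the Python-style dicts echoed with each item) and, per host, either creates the service's config directory ("changed") or skips the item when the host is not in the service's inventory group or the service is disabled. The real logic is an Ansible task in the kolla-ansible prometheus role; the sketch below is only a hypothetical Python rendering of that per-item decision, with a trimmed-down service dict and invented group membership based on the values visible in the log.

```python
# Hypothetical, trimmed-down rendering of the per-item skip/changed decision
# seen in the "Ensuring config directories exist" output. The real task is an
# Ansible loop; only the selection pattern is mirrored here.

prometheus_services = {
    "prometheus-server": {"group": "prometheus", "enabled": True},
    "prometheus-node-exporter": {"group": "prometheus-node-exporter", "enabled": True},
    "prometheus-openstack-exporter": {"group": "prometheus-openstack-exporter", "enabled": False},
}

# Illustrative group membership only; the real values come from the inventory.
host_groups = {
    "testbed-manager": {"prometheus", "prometheus-node-exporter"},
    "testbed-node-0": {"prometheus-node-exporter"},
}

def config_dir_actions(host: str):
    """Yield (service, action): 'changed' when the config directory would be
    created on this host, 'skipping' when the conditional is false."""
    for name, svc in prometheus_services.items():
        applies = svc["enabled"] and svc["group"] in host_groups.get(host, set())
        yield name, "changed" if applies else "skipping"

for service, action in config_dir_actions("testbed-node-0"):
    print(f"{action}: [testbed-node-0] => (item={service})")
```

Under these assumptions the output matches the pattern in the log: compute-style nodes skip prometheus-server, create the node-exporter directory, and skip services whose `enabled` flag is False (for example the openstack exporter).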
skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.728686 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.728701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.728713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.728726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.728739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.728751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.728764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.728854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.728874 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.728887 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.728900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.728913 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-27 01:13:31.728927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.729063 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.729077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.729090 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.729115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.729223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.729237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.729270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.729376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 
'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.729396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.729410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.729435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.729449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.729469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.729580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729593 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.729616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.729630 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}})  2025-03-27 01:13:31.729650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.729744 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.729803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.729826 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.729846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.729958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.729971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.729994 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.730008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.730074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.730089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.730172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.730190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.730215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.730229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.730250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.730267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.730280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 
01:13:31.730355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.730373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.730386 | orchestrator | 2025-03-27 01:13:31.730399 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-03-27 01:13:31.730412 | orchestrator | Thursday 27 March 2025 01:08:27 +0000 (0:00:04.228) 0:00:09.907 ******** 2025-03-27 01:13:31.730439 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:13:31.730452 | orchestrator | 2025-03-27 01:13:31.730464 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-03-27 01:13:31.730476 | orchestrator | Thursday 27 March 2025 01:08:31 +0000 (0:00:03.938) 0:00:13.846 ******** 2025-03-27 01:13:31.730500 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-27 01:13:31.730522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.730565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.730579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.730658 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.730678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.730693 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.730722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.730744 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.730758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.730773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.730787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.730863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.730883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.730908 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-27 01:13:31.730930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.730945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.730959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.730973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.731064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.731095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.731108 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.731128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.731141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.731153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.731166 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.731241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.731259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.731282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.731302 | orchestrator | 2025-03-27 01:13:31.731315 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-03-27 01:13:31.731328 | orchestrator | Thursday 27 March 2025 01:08:39 +0000 (0:00:07.829) 0:00:21.676 ******** 2025-03-27 01:13:31.731340 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.731353 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.731378 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.731454 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.731473 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.731486 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:13:31.731510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.731586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.731602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.731615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.731628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.731641 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.731654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.731734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.731782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.731806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.731820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.731834 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.731848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.731862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.731875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.731888 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.731901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.731991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732032 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.732045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.732059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732111 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.732172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.732206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732230 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.732241 | orchestrator | 2025-03-27 01:13:31.732252 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-03-27 01:13:31.732274 | orchestrator | Thursday 27 March 2025 01:08:42 +0000 (0:00:02.642) 0:00:24.318 ******** 2025-03-27 01:13:31.732286 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.732298 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.732309 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732387 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.732412 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732423 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:13:31.732434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.732446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732494 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.732508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.732594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
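Each per-item result in these certificate tasks corresponds to one entry of the Prometheus service map; the visible keys in every item are container_name, group, enabled, image, volumes, dimensions, and (for some services) haproxy settings. As a rough sketch only, assuming a dictionary named prometheus_services shaped like the logged items, a certificate-copy loop of this kind could look as follows; this is an illustration under those assumptions, not the actual kolla-ansible service-cert-copy role source:

# Illustrative sketch: loop over a service map shaped like the logged items.
# prometheus_services and the src/dest paths are placeholders for the example.
- name: "prometheus | Copying over extra CA certificates (sketch)"
  ansible.builtin.copy:
    src: "/etc/kolla/certificates/ca/"
    dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
    mode: "0644"
  with_dict: "{{ prometheus_services }}"
  # Skip items whose service is disabled or not scheduled on this host,
  # which matches the mix of 'changed' and 'skipping' results in the log.
  when:
    - item.value.enabled | bool
    - inventory_hostname in groups[item.value.group]

The subsequent "backend internal TLS certificate" and "backend internal TLS key" tasks iterate over the same map and are skipped for every item here, presumably because backend TLS is not enabled in this testbed.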
2025-03-27 01:13:31.732646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732659 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.732670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.732682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.732800 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.732815 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.732827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732849 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.732859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.732870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732908 | orchestrator | skipping: [testbed-node-4] 2025-03-27 
01:13:31.732968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-27 01:13:31.732984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.732995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.733007 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.733018 | orchestrator | 2025-03-27 01:13:31.733029 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-03-27 01:13:31.733040 | orchestrator | Thursday 27 March 2025 01:08:47 +0000 (0:00:04.973) 0:00:29.292 ******** 2025-03-27 01:13:31.733051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.733072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.733128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.733142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.733154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.733165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.733176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.733200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.733234 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-27 01:13:31.733247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.733259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.733270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-03-27 01:13:31.733288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.733320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733364 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.733377 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.733418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.733436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.733447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.733504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.733524 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.733580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.733598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.733637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.733660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.733672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733724 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.733758 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.733772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.733792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.733809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.733833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.733880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.733892 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-27 01:13:31.733906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.733917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733926 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.733936 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.733983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 
'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.733993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.734006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.734037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.734054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.734085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.734096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.734110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.734119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.734136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': 
{}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.734164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.734175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.734189 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.734198 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.734207 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.734216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.734231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.734259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.734270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.734284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.734293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.734302 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.734311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.734319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.734335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.734364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.734374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.734388 | orchestrator | 2025-03-27 01:13:31.734397 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-03-27 01:13:31.734409 | 
orchestrator | Thursday 27 March 2025 01:08:56 +0000 (0:00:09.653) 0:00:38.945 ******** 2025-03-27 01:13:31.734418 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-27 01:13:31.734426 | orchestrator | 2025-03-27 01:13:31.734435 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-03-27 01:13:31.734443 | orchestrator | Thursday 27 March 2025 01:08:57 +0000 (0:00:00.844) 0:00:39.790 ******** 2025-03-27 01:13:31.734452 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1329383, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734461 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1329383, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734476 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1329383, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734486 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1329400, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734513 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1329383, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734555 | 
orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1329383, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734565 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1329383, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 01:13:31.734574 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1329383, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734583 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1329387, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734599 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1329400, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734608 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1329400, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734638 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1329400, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734655 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1329400, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734664 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1329397, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734673 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1329400, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734688 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1329387, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734697 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1329387, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 
1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734706 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1329387, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734734 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1329387, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734749 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1329397, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734759 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1329400, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 01:13:31.734774 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329449, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7526486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734784 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 7933, 'inode': 1329397, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734793 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1329387, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734801 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1329397, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734810 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1329397, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734843 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1329406, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.728647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734854 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329449, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7526486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734872 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329449, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7526486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734882 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1329397, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734891 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329449, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7526486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734899 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329449, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7526486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734914 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1329394, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734942 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1329406, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.728647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734958 
| orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1329406, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.728647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734968 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1329406, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.728647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734977 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329449, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7526486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734986 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1329404, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.734995 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1329387, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7236466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 01:13:31.735009 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1329406, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.728647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735037 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1329394, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735054 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1329394, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735063 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1329394, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735072 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1329406, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.728647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735081 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329446, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7516487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735090 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1329394, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735104 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1329404, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735137 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1329404, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735148 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1329404, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735157 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1329394, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735166 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329446, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7516487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735175 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1329391, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7246468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735184 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1329404, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735197 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329446, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7516487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735230 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329446, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7516487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735241 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329446, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7516487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735250 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1329391, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7246468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735259 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1329404, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735267 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1329440, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7476482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735283 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1329397, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 01:13:31.735292 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.735307 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1329391, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7246468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735335 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1329391, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7246468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735345 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1329440, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7476482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735353 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.735362 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1329391, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7246468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735371 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329446, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7516487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735380 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1329440, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7476482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735393 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.735402 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1329440, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7476482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735411 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.735426 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1329440, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7476482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735453 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.735464 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1329391, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7246468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735473 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1329440, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7476482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-03-27 01:13:31.735481 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.735490 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329449, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7526486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 01:13:31.735499 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1329406, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.728647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 01:13:31.735513 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1329394, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7256467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 01:13:31.735542 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1329404, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.726647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 01:13:31.735573 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329446, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7516487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 01:13:31.735583 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1329391, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7246468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 01:13:31.735592 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1329440, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.7476482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-27 01:13:31.735601 | orchestrator | 2025-03-27 01:13:31.735609 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-03-27 01:13:31.735618 | orchestrator | Thursday 27 March 2025 01:09:44 +0000 (0:00:46.739) 0:01:26.529 ******** 2025-03-27 01:13:31.735626 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-27 01:13:31.735635 | orchestrator | 2025-03-27 01:13:31.735643 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-03-27 01:13:31.735652 | orchestrator | Thursday 27 March 2025 01:09:44 +0000 (0:00:00.472) 0:01:27.001 ******** 2025-03-27 01:13:31.735660 | orchestrator | [WARNING]: Skipped 2025-03-27 01:13:31.735669 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735685 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-03-27 01:13:31.735693 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735702 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-03-27 01:13:31.735710 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-27 01:13:31.735719 | orchestrator | [WARNING]: Skipped 2025-03-27 01:13:31.735728 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735736 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-03-27 01:13:31.735745 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735753 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-03-27 01:13:31.735761 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 01:13:31.735770 | orchestrator | [WARNING]: Skipped 2025-03-27 01:13:31.735779 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735787 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-03-27 01:13:31.735796 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735804 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-03-27 01:13:31.735813 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-27 01:13:31.735821 | orchestrator | [WARNING]: Skipped 2025-03-27 01:13:31.735830 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735838 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-03-27 01:13:31.735847 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735855 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-03-27 01:13:31.735864 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-27 01:13:31.735872 | orchestrator | [WARNING]: Skipped 2025-03-27 01:13:31.735881 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735889 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-03-27 01:13:31.735898 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735906 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-03-27 01:13:31.735915 | orchestrator | [WARNING]: Skipped 2025-03-27 01:13:31.735923 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735932 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-03-27 01:13:31.735940 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735949 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-03-27 01:13:31.735957 | orchestrator | [WARNING]: Skipped 2025-03-27 01:13:31.735966 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735974 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-03-27 01:13:31.735983 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-03-27 01:13:31.735991 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-03-27 01:13:31.736000 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-27 01:13:31.736008 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-27 01:13:31.736035 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-27 01:13:31.736045 | orchestrator | 2025-03-27 01:13:31.736054 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-03-27 01:13:31.736062 | orchestrator | Thursday 27 March 2025 01:09:47 +0000 (0:00:02.529) 0:01:29.531 ******** 2025-03-27 01:13:31.736071 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-27 01:13:31.736079 | orchestrator | skipping: 
[testbed-node-1] 2025-03-27 01:13:31.736088 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-27 01:13:31.736102 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.736110 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-27 01:13:31.736119 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.736128 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-27 01:13:31.736136 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.736145 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-27 01:13:31.736153 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.736162 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-03-27 01:13:31.736170 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.736178 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-03-27 01:13:31.736187 | orchestrator | 2025-03-27 01:13:31.736195 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-03-27 01:13:31.736207 | orchestrator | Thursday 27 March 2025 01:10:06 +0000 (0:00:19.409) 0:01:48.940 ******** 2025-03-27 01:13:31.736216 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-27 01:13:31.736225 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.736233 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-27 01:13:31.736242 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.736250 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-27 01:13:31.736259 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.736267 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-27 01:13:31.736276 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.736284 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-27 01:13:31.736293 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.736301 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-03-27 01:13:31.736309 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.736318 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-03-27 01:13:31.736327 | orchestrator | 2025-03-27 01:13:31.736335 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-03-27 01:13:31.736344 | orchestrator | Thursday 27 March 2025 01:10:12 +0000 (0:00:05.532) 0:01:54.472 ******** 2025-03-27 01:13:31.736352 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-27 01:13:31.736361 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.736370 | orchestrator | skipping: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-27 01:13:31.736378 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.736387 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-27 01:13:31.736396 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.736404 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-27 01:13:31.736413 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.736421 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-27 01:13:31.736430 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.736442 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-03-27 01:13:31.736451 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.736460 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-03-27 01:13:31.736468 | orchestrator | 2025-03-27 01:13:31.736477 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-03-27 01:13:31.736485 | orchestrator | Thursday 27 March 2025 01:10:16 +0000 (0:00:04.120) 0:01:58.592 ******** 2025-03-27 01:13:31.736494 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-27 01:13:31.736502 | orchestrator | 2025-03-27 01:13:31.736511 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-03-27 01:13:31.736519 | orchestrator | Thursday 27 March 2025 01:10:17 +0000 (0:00:00.673) 0:01:59.266 ******** 2025-03-27 01:13:31.736563 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:13:31.736578 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.736587 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.736595 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.736604 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.736612 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.736621 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.736629 | orchestrator | 2025-03-27 01:13:31.736638 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-03-27 01:13:31.736650 | orchestrator | Thursday 27 March 2025 01:10:18 +0000 (0:00:00.938) 0:02:00.204 ******** 2025-03-27 01:13:31.736658 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:13:31.736667 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.736676 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.736684 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.736692 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:13:31.736701 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:13:31.736709 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:13:31.736718 | orchestrator | 2025-03-27 01:13:31.736726 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-03-27 01:13:31.736735 | orchestrator | Thursday 27 March 2025 01:10:23 +0000 (0:00:05.757) 0:02:05.962 ******** 
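Note on the "Copying cloud config file for openstack exporter" task whose results follow: the clouds.yml.j2 template referenced there renders an os-client-config style clouds.yml for the openstack exporter. A minimal sketch of that generic format, purely illustrative and not the rendered output of this deployment (endpoint, names and credentials are placeholders):

    clouds:
      openstack:
        auth:
          auth_url: https://keystone.example.com:5000/v3   # placeholder endpoint
          username: prometheus                             # placeholder service user
          password: secret                                 # placeholder credential
          project_name: service
          user_domain_name: Default
          project_domain_name: Default
        region_name: RegionOne
        identity_api_version: 3

In this run only testbed-manager renders monitoring configs (the other hosts skip), which is consistent with the skipping/changed pattern in the task output below.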
2025-03-27 01:13:31.736743 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-27 01:13:31.736752 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.736762 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-27 01:13:31.736771 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.736780 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-27 01:13:31.736789 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-27 01:13:31.736798 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.736807 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.736815 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-27 01:13:31.736824 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.736833 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-27 01:13:31.736841 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.736848 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-03-27 01:13:31.736856 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:13:31.736864 | orchestrator | 2025-03-27 01:13:31.736872 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-03-27 01:13:31.736880 | orchestrator | Thursday 27 March 2025 01:10:29 +0000 (0:00:05.748) 0:02:11.710 ******** 2025-03-27 01:13:31.736888 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-27 01:13:31.736901 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.736909 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-27 01:13:31.736917 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.736925 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-27 01:13:31.736933 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.736944 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-27 01:13:31.736953 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.736960 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-27 01:13:31.736968 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.736976 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-03-27 01:13:31.736984 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.736992 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-03-27 01:13:31.737000 | orchestrator | 2025-03-27 01:13:31.737008 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-03-27 01:13:31.737016 | orchestrator | Thursday 27 March 2025 01:10:35 +0000 (0:00:06.155) 0:02:17.865 ******** 2025-03-27 01:13:31.737024 | orchestrator | [WARNING]: 
Skipped 2025-03-27 01:13:31.737032 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-03-27 01:13:31.737040 | orchestrator | due to this access issue: 2025-03-27 01:13:31.737048 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-03-27 01:13:31.737056 | orchestrator | not a directory 2025-03-27 01:13:31.737068 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-27 01:13:31.737076 | orchestrator | 2025-03-27 01:13:31.737084 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-03-27 01:13:31.737092 | orchestrator | Thursday 27 March 2025 01:10:37 +0000 (0:00:02.198) 0:02:20.064 ******** 2025-03-27 01:13:31.737100 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:13:31.737108 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.737116 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.737124 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.737132 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.737139 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.737147 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.737155 | orchestrator | 2025-03-27 01:13:31.737163 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-03-27 01:13:31.737171 | orchestrator | Thursday 27 March 2025 01:10:39 +0000 (0:00:01.403) 0:02:21.468 ******** 2025-03-27 01:13:31.737183 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:13:31.737192 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.737199 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.737207 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.737215 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.737223 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.737231 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.737239 | orchestrator | 2025-03-27 01:13:31.737247 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-03-27 01:13:31.737255 | orchestrator | Thursday 27 March 2025 01:10:40 +0000 (0:00:01.169) 0:02:22.637 ******** 2025-03-27 01:13:31.737263 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-27 01:13:31.737271 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.737279 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-27 01:13:31.737291 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.737299 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-27 01:13:31.737307 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.737315 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-27 01:13:31.737323 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:13:31.737331 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-27 01:13:31.737339 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.737347 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-27 
01:13:31.737355 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.737363 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-03-27 01:13:31.737371 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.737379 | orchestrator | 2025-03-27 01:13:31.737387 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-03-27 01:13:31.737395 | orchestrator | Thursday 27 March 2025 01:10:45 +0000 (0:00:04.925) 0:02:27.563 ******** 2025-03-27 01:13:31.737403 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-27 01:13:31.737411 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:31.737418 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-27 01:13:31.737426 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:31.737434 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-27 01:13:31.737442 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:31.737450 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-27 01:13:31.737458 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:31.737466 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-27 01:13:31.737474 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:31.737482 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-27 01:13:31.737490 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:31.737498 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-03-27 01:13:31.737506 | orchestrator | skipping: [testbed-manager] 2025-03-27 01:13:31.737514 | orchestrator | 2025-03-27 01:13:31.737522 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-03-27 01:13:31.737543 | orchestrator | Thursday 27 March 2025 01:10:51 +0000 (0:00:06.605) 0:02:34.168 ******** 2025-03-27 01:13:31.737552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.737572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.737585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.737594 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-27 01:13:31.737606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.737614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.737623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.737641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.737653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-27 01:13:31.737662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.737670 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.737678 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.737687 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.737695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.737710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.737724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.737732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.737741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-03-27 01:13:31.737749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.737757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.737766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.737780 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.737794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.737802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-03-27 01:13:31.737814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.737822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.737831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.737839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.737857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.737870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.737879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.737887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.737895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.737941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-03-27 01:13:31.737954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.737969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.737977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.737986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.737994 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-27 01:13:31.738012 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.738041 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.738062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.738070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.738078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.738091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.738125 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.738133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738142 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738155 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.738167 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.738175 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.738195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.738209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.738218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.738230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.738242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.738250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.738259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-03-27 01:13:31.738272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-27 01:13:31.738296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-27 01:13:31.738308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.738316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.738324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.738359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 
'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.738376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-27 01:13:31.738396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-27 01:13:31.738418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-27 01:13:31.738430 | orchestrator | 2025-03-27 01:13:31.738438 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-03-27 01:13:31.738447 | orchestrator | Thursday 27 March 2025 01:11:00 +0000 (0:00:08.413) 0:02:42.582 ******** 2025-03-27 01:13:31.738455 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-03-27 01:13:31.738463 | orchestrator | 2025-03-27 01:13:31.738471 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-27 01:13:31.738479 | orchestrator | Thursday 27 March 2025 01:11:03 +0000 (0:00:03.525) 0:02:46.108 ******** 2025-03-27 01:13:31.738486 | orchestrator | 2025-03-27 01:13:31.738494 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-27 01:13:31.738502 | orchestrator | Thursday 27 March 2025 01:11:03 +0000 (0:00:00.059) 0:02:46.167 ******** 2025-03-27 01:13:31.738510 | orchestrator | 2025-03-27 01:13:31.738518 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-27 01:13:31.738526 | orchestrator | Thursday 27 March 2025 01:11:04 +0000 (0:00:00.191) 0:02:46.359 ******** 2025-03-27 01:13:31.738546 | orchestrator | 2025-03-27 01:13:31.738555 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-27 01:13:31.738562 | orchestrator | Thursday 27 March 2025 01:11:04 +0000 (0:00:00.058) 0:02:46.418 ******** 2025-03-27 01:13:31.738570 | orchestrator | 2025-03-27 01:13:31.738578 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-27 01:13:31.738586 | orchestrator | Thursday 27 March 2025 01:11:04 +0000 (0:00:00.056) 0:02:46.475 ******** 2025-03-27 01:13:31.738594 | orchestrator | 2025-03-27 01:13:31.738602 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-27 01:13:31.738613 | orchestrator | Thursday 27 March 2025 01:11:04 +0000 (0:00:00.061) 0:02:46.537 ******** 2025-03-27 01:13:31.738621 | orchestrator | 2025-03-27 01:13:31.738629 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-03-27 01:13:31.738637 | orchestrator | Thursday 27 March 2025 01:11:04 +0000 (0:00:00.208) 0:02:46.745 ******** 2025-03-27 01:13:31.738645 | orchestrator | 2025-03-27 01:13:31.738653 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-03-27 01:13:31.738661 | orchestrator | Thursday 27 March 2025 01:11:04 +0000 (0:00:00.121) 0:02:46.867 ******** 2025-03-27 01:13:31.738668 | orchestrator | changed: [testbed-manager] 2025-03-27 01:13:31.738676 | orchestrator | 2025-03-27 01:13:31.738684 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-03-27 01:13:31.738692 | orchestrator | Thursday 27 March 2025 01:11:25 +0000 (0:00:21.271) 0:03:08.138 ******** 2025-03-27 01:13:31.738700 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:13:31.738708 | orchestrator | changed: [testbed-manager] 2025-03-27 01:13:31.738716 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:13:31.738724 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:13:31.738731 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:13:31.738739 | orchestrator | changed: 
[testbed-node-5] 2025-03-27 01:13:31.738747 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:13:31.738758 | orchestrator | 2025-03-27 01:13:31.738766 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-03-27 01:13:31.738777 | orchestrator | Thursday 27 March 2025 01:11:51 +0000 (0:00:25.992) 0:03:34.130 ******** 2025-03-27 01:13:31.738786 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:13:31.738793 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:13:31.738801 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:13:31.738809 | orchestrator | 2025-03-27 01:13:31.738817 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-03-27 01:13:31.738830 | orchestrator | Thursday 27 March 2025 01:12:04 +0000 (0:00:12.591) 0:03:46.722 ******** 2025-03-27 01:13:31.738838 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:13:31.738846 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:13:31.738854 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:13:31.738862 | orchestrator | 2025-03-27 01:13:31.738869 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-03-27 01:13:31.738877 | orchestrator | Thursday 27 March 2025 01:12:16 +0000 (0:00:12.263) 0:03:58.985 ******** 2025-03-27 01:13:31.738885 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:13:31.738893 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:13:31.738901 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:13:31.738908 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:13:31.738916 | orchestrator | changed: [testbed-manager] 2025-03-27 01:13:31.738924 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:13:31.738932 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:13:31.738940 | orchestrator | 2025-03-27 01:13:31.738948 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-03-27 01:13:31.738956 | orchestrator | Thursday 27 March 2025 01:12:34 +0000 (0:00:17.978) 0:04:16.964 ******** 2025-03-27 01:13:31.738963 | orchestrator | changed: [testbed-manager] 2025-03-27 01:13:31.738971 | orchestrator | 2025-03-27 01:13:31.738979 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-03-27 01:13:31.738987 | orchestrator | Thursday 27 March 2025 01:12:51 +0000 (0:00:16.662) 0:04:33.626 ******** 2025-03-27 01:13:31.738995 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:13:31.739003 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:13:31.739010 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:13:31.739018 | orchestrator | 2025-03-27 01:13:31.739026 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-03-27 01:13:31.739034 | orchestrator | Thursday 27 March 2025 01:13:04 +0000 (0:00:12.608) 0:04:46.234 ******** 2025-03-27 01:13:31.739042 | orchestrator | changed: [testbed-manager] 2025-03-27 01:13:31.739050 | orchestrator | 2025-03-27 01:13:31.739057 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-03-27 01:13:31.739065 | orchestrator | Thursday 27 March 2025 01:13:14 +0000 (0:00:10.368) 0:04:56.603 ******** 2025-03-27 01:13:31.739073 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:13:31.739081 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:13:31.739089 | orchestrator | changed: 
[testbed-node-5] 2025-03-27 01:13:31.739097 | orchestrator | 2025-03-27 01:13:31.739105 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:13:31.739113 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-03-27 01:13:31.739121 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-03-27 01:13:31.739129 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-03-27 01:13:31.739137 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-03-27 01:13:31.739145 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-03-27 01:13:31.739153 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-03-27 01:13:31.739161 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-03-27 01:13:31.739172 | orchestrator | 2025-03-27 01:13:31.739180 | orchestrator | 2025-03-27 01:13:31.739188 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:13:31.739196 | orchestrator | Thursday 27 March 2025 01:13:29 +0000 (0:00:15.566) 0:05:12.169 ******** 2025-03-27 01:13:31.739204 | orchestrator | =============================================================================== 2025-03-27 01:13:31.739212 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 46.74s 2025-03-27 01:13:31.739220 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 25.99s 2025-03-27 01:13:31.739228 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.27s 2025-03-27 01:13:31.739235 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.41s 2025-03-27 01:13:31.739243 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.98s 2025-03-27 01:13:31.739251 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 16.66s 2025-03-27 01:13:31.739262 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 15.57s 2025-03-27 01:13:31.739270 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.61s 2025-03-27 01:13:31.739278 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.59s 2025-03-27 01:13:31.739286 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.26s 2025-03-27 01:13:31.739296 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.37s 2025-03-27 01:13:34.771026 | orchestrator | prometheus : Copying over config.json files ----------------------------- 9.65s 2025-03-27 01:13:34.771142 | orchestrator | prometheus : Check prometheus containers -------------------------------- 8.41s 2025-03-27 01:13:34.771163 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.83s 2025-03-27 01:13:34.771178 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 6.61s 2025-03-27 01:13:34.771193 | orchestrator | prometheus : Copying config file for blackbox exporter 
------------------ 6.16s 2025-03-27 01:13:34.771207 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 5.76s 2025-03-27 01:13:34.771221 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 5.75s 2025-03-27 01:13:34.771235 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.53s 2025-03-27 01:13:34.771248 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 4.97s 2025-03-27 01:13:34.771263 | orchestrator | 2025-03-27 01:13:31 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:34.771277 | orchestrator | 2025-03-27 01:13:31 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:13:34.771291 | orchestrator | 2025-03-27 01:13:31 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:34.771305 | orchestrator | 2025-03-27 01:13:31 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:34.771319 | orchestrator | 2025-03-27 01:13:31 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:34.771462 | orchestrator | 2025-03-27 01:13:34 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:34.772303 | orchestrator | 2025-03-27 01:13:34 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:34.772338 | orchestrator | 2025-03-27 01:13:34 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:13:34.773808 | orchestrator | 2025-03-27 01:13:34 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:34.775877 | orchestrator | 2025-03-27 01:13:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:37.821673 | orchestrator | 2025-03-27 01:13:34 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:37.821826 | orchestrator | 2025-03-27 01:13:37 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:37.823340 | orchestrator | 2025-03-27 01:13:37 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:37.824110 | orchestrator | 2025-03-27 01:13:37 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:13:37.825062 | orchestrator | 2025-03-27 01:13:37 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:37.827996 | orchestrator | 2025-03-27 01:13:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:37.828602 | orchestrator | 2025-03-27 01:13:37 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:40.873362 | orchestrator | 2025-03-27 01:13:40 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:40.873642 | orchestrator | 2025-03-27 01:13:40 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:40.875563 | orchestrator | 2025-03-27 01:13:40 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:13:40.876301 | orchestrator | 2025-03-27 01:13:40 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:40.877142 | orchestrator | 2025-03-27 01:13:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:40.877264 | orchestrator | 2025-03-27 01:13:40 | INFO  | Wait 1 second(s) until the next check 
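
Note on the repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above: they come from the deployment tooling polling the state of its asynchronous tasks once per second until each task leaves the STARTED state (the first SUCCESS appears a few records below). The following is only a minimal sketch of such a poll loop, assuming a Celery-style state model; the function name wait_for_tasks, the get_task_state callback, and the PENDING state are illustrative assumptions and are not taken from the actual osism code.

    import time

    # States in which a task is still considered "not finished" (assumed set).
    PENDING_STATES = {"PENDING", "STARTED"}

    def wait_for_tasks(get_task_state, task_ids, interval=1):
        """Poll each task id until none of them is in a pending state.

        get_task_state is a placeholder callable that returns the current
        state string of a task (for example via a Celery AsyncResult lookup);
        it is an assumption for this sketch, not part of the original log.
        """
        remaining = set(task_ids)
        while remaining:
            for task_id in sorted(remaining):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state not in PENDING_STATES:
                    # Task reached SUCCESS (or FAILURE); stop polling it.
                    remaining.discard(task_id)
            if remaining:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)
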
2025-03-27 01:13:43.916467 | orchestrator | 2025-03-27 01:13:43 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:43.916938 | orchestrator | 2025-03-27 01:13:43 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:43.918596 | orchestrator | 2025-03-27 01:13:43 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:13:43.919211 | orchestrator | 2025-03-27 01:13:43 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:43.920577 | orchestrator | 2025-03-27 01:13:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:46.963748 | orchestrator | 2025-03-27 01:13:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:46.963881 | orchestrator | 2025-03-27 01:13:46 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:46.966805 | orchestrator | 2025-03-27 01:13:46 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:46.968455 | orchestrator | 2025-03-27 01:13:46 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:13:46.970979 | orchestrator | 2025-03-27 01:13:46 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:46.973699 | orchestrator | 2025-03-27 01:13:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:50.025147 | orchestrator | 2025-03-27 01:13:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:50.025290 | orchestrator | 2025-03-27 01:13:50 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:50.027521 | orchestrator | 2025-03-27 01:13:50 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:50.029267 | orchestrator | 2025-03-27 01:13:50 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:13:50.033998 | orchestrator | 2025-03-27 01:13:50 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:50.034651 | orchestrator | 2025-03-27 01:13:50 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:53.083748 | orchestrator | 2025-03-27 01:13:50 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:53.083872 | orchestrator | 2025-03-27 01:13:53 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state STARTED 2025-03-27 01:13:53.087225 | orchestrator | 2025-03-27 01:13:53 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:53.089715 | orchestrator | 2025-03-27 01:13:53 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:13:53.091072 | orchestrator | 2025-03-27 01:13:53 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:53.092684 | orchestrator | 2025-03-27 01:13:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:53.092938 | orchestrator | 2025-03-27 01:13:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:56.151763 | orchestrator | 2025-03-27 01:13:56 | INFO  | Task dcab7ad3-4b16-4013-95c2-02b14149577a is in state SUCCESS 2025-03-27 01:13:56.153050 | orchestrator | 2025-03-27 01:13:56.153086 | orchestrator | 2025-03-27 01:13:56.153097 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:13:56.153107 | 
orchestrator | 2025-03-27 01:13:56.153116 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:13:56.153179 | orchestrator | Thursday 27 March 2025 01:10:12 +0000 (0:00:00.328) 0:00:00.328 ******** 2025-03-27 01:13:56.153192 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:13:56.153203 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:13:56.153212 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:13:56.153221 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:13:56.153231 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:13:56.153240 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:13:56.153249 | orchestrator | 2025-03-27 01:13:56.153259 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:13:56.153268 | orchestrator | Thursday 27 March 2025 01:10:13 +0000 (0:00:01.052) 0:00:01.380 ******** 2025-03-27 01:13:56.153277 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-03-27 01:13:56.153287 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-03-27 01:13:56.153296 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-03-27 01:13:56.153305 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-03-27 01:13:56.153314 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-03-27 01:13:56.153323 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-03-27 01:13:56.153332 | orchestrator | 2025-03-27 01:13:56.153341 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-03-27 01:13:56.153351 | orchestrator | 2025-03-27 01:13:56.153395 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-03-27 01:13:56.153456 | orchestrator | Thursday 27 March 2025 01:10:15 +0000 (0:00:01.716) 0:00:03.097 ******** 2025-03-27 01:13:56.153468 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:13:56.153478 | orchestrator | 2025-03-27 01:13:56.153488 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-03-27 01:13:56.153498 | orchestrator | Thursday 27 March 2025 01:10:16 +0000 (0:00:01.601) 0:00:04.699 ******** 2025-03-27 01:13:56.153849 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-03-27 01:13:56.153870 | orchestrator | 2025-03-27 01:13:56.153881 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-03-27 01:13:56.153891 | orchestrator | Thursday 27 March 2025 01:10:20 +0000 (0:00:03.705) 0:00:08.404 ******** 2025-03-27 01:13:56.153922 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-03-27 01:13:56.153933 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-03-27 01:13:56.153944 | orchestrator | 2025-03-27 01:13:56.153954 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-03-27 01:13:56.153965 | orchestrator | Thursday 27 March 2025 01:10:27 +0000 (0:00:07.263) 0:00:15.667 ******** 2025-03-27 01:13:56.153975 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-27 01:13:56.154011 | orchestrator | 2025-03-27 
01:13:56.154059 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-03-27 01:13:56.154069 | orchestrator | Thursday 27 March 2025 01:10:32 +0000 (0:00:04.639) 0:00:20.307 ******** 2025-03-27 01:13:56.154078 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-27 01:13:56.154087 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-03-27 01:13:56.154097 | orchestrator | 2025-03-27 01:13:56.154106 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-03-27 01:13:56.154115 | orchestrator | Thursday 27 March 2025 01:10:36 +0000 (0:00:04.393) 0:00:24.700 ******** 2025-03-27 01:13:56.154124 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-27 01:13:56.154168 | orchestrator | 2025-03-27 01:13:56.154612 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-03-27 01:13:56.154677 | orchestrator | Thursday 27 March 2025 01:10:40 +0000 (0:00:03.692) 0:00:28.393 ******** 2025-03-27 01:13:56.154688 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-03-27 01:13:56.154698 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-03-27 01:13:56.154707 | orchestrator | 2025-03-27 01:13:56.154716 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-03-27 01:13:56.154725 | orchestrator | Thursday 27 March 2025 01:10:51 +0000 (0:00:11.101) 0:00:39.494 ******** 2025-03-27 01:13:56.154763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.154777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.154788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.154810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.154821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.154831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.155009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.155024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.155040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.155051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.155061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.155101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.155114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.155129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.155139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.155149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.155165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.155197 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.155214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.155224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.155234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.155250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.155283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.155299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.155309 | orchestrator | 2025-03-27 01:13:56.155319 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-03-27 01:13:56.155328 | orchestrator | Thursday 27 March 2025 01:10:57 +0000 (0:00:05.297) 0:00:44.791 ******** 2025-03-27 01:13:56.155338 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:56.155347 | 
orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:56.155356 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:56.155366 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:13:56.155376 | orchestrator | 2025-03-27 01:13:56.155385 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-03-27 01:13:56.155394 | orchestrator | Thursday 27 March 2025 01:10:59 +0000 (0:00:02.249) 0:00:47.040 ******** 2025-03-27 01:13:56.155403 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-03-27 01:13:56.155413 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-03-27 01:13:56.155422 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-03-27 01:13:56.155431 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-03-27 01:13:56.155440 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-03-27 01:13:56.155449 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-03-27 01:13:56.155458 | orchestrator | 2025-03-27 01:13:56.155467 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-03-27 01:13:56.155476 | orchestrator | Thursday 27 March 2025 01:11:03 +0000 (0:00:03.798) 0:00:50.839 ******** 2025-03-27 01:13:56.155486 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-27 01:13:56.155498 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-27 01:13:56.155561 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-27 01:13:56.155575 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-27 01:13:56.155584 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-27 01:13:56.155594 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-03-27 01:13:56.155604 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-27 01:13:56.155643 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-27 01:13:56.155666 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-27 01:13:56.155679 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-27 01:13:56.155690 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-27 01:13:56.155722 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-03-27 01:13:56.155739 | orchestrator | 2025-03-27 01:13:56.155750 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-03-27 01:13:56.155760 | orchestrator | Thursday 27 March 2025 01:11:07 +0000 (0:00:04.658) 0:00:55.498 ******** 2025-03-27 01:13:56.155770 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-03-27 01:13:56.155781 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-03-27 01:13:56.155792 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-03-27 01:13:56.155802 | orchestrator | 2025-03-27 01:13:56.155813 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-03-27 01:13:56.155823 | orchestrator | Thursday 27 March 2025 01:11:10 +0000 (0:00:02.589) 0:00:58.087 ******** 2025-03-27 01:13:56.155833 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-03-27 01:13:56.155844 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-03-27 01:13:56.155855 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-03-27 01:13:56.155865 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-03-27 01:13:56.155875 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-03-27 01:13:56.155885 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-03-27 01:13:56.155896 | orchestrator | 2025-03-27 01:13:56.155906 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-03-27 01:13:56.155916 | orchestrator | Thursday 27 March 2025 01:11:14 +0000 (0:00:04.356) 0:01:02.444 ******** 2025-03-27 01:13:56.155927 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-03-27 01:13:56.155937 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-03-27 01:13:56.155948 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-03-27 01:13:56.155958 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-03-27 01:13:56.155969 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-03-27 01:13:56.155979 | orchestrator | ok: 
[testbed-node-5] => (item=cinder-backup) 2025-03-27 01:13:56.155987 | orchestrator | 2025-03-27 01:13:56.155997 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-03-27 01:13:56.156006 | orchestrator | Thursday 27 March 2025 01:11:16 +0000 (0:00:01.327) 0:01:03.771 ******** 2025-03-27 01:13:56.156015 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:56.156024 | orchestrator | 2025-03-27 01:13:56.156034 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-03-27 01:13:56.156043 | orchestrator | Thursday 27 March 2025 01:11:16 +0000 (0:00:00.125) 0:01:03.896 ******** 2025-03-27 01:13:56.156052 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:56.156061 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:56.156070 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:56.156079 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:56.156088 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:56.156098 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:56.156107 | orchestrator | 2025-03-27 01:13:56.156116 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-03-27 01:13:56.156125 | orchestrator | Thursday 27 March 2025 01:11:17 +0000 (0:00:01.114) 0:01:05.011 ******** 2025-03-27 01:13:56.156135 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:13:56.156151 | orchestrator | 2025-03-27 01:13:56.156160 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-03-27 01:13:56.156169 | orchestrator | Thursday 27 March 2025 01:11:19 +0000 (0:00:02.223) 0:01:07.235 ******** 2025-03-27 01:13:56.156179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.156216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.156228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.156238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.156248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.156268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.156306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.156318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.156328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.156337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.156359 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.156369 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.156379 | orchestrator | 2025-03-27 01:13:56.156388 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-03-27 01:13:56.156398 | orchestrator | Thursday 27 March 2025 01:11:24 +0000 (0:00:04.577) 0:01:11.812 ******** 2025-03-27 01:13:56.156428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.156439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156449 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:56.156458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.156481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156491 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:56.156501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.156580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156595 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:56.156605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156631 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:56.156649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156669 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:56.156707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156729 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:56.156738 | orchestrator | 2025-03-27 01:13:56.156748 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-03-27 01:13:56.156757 | orchestrator | Thursday 27 March 2025 01:11:27 +0000 (0:00:03.566) 0:01:15.379 ******** 2025-03-27 01:13:56.156772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.156781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.156830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156841 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:56.156850 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:56.156860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.156878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156887 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:56.156897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156924 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:56.156955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.156981 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:56.156990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157017 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:56.157026 | orchestrator | 2025-03-27 01:13:56.157035 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-03-27 01:13:56.157048 | orchestrator | Thursday 27 March 2025 01:11:34 +0000 (0:00:06.530) 0:01:21.910 ******** 2025-03-27 01:13:56.157058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.157089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.157100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.157114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.157140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.157198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.157319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157355 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
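The loop items echoed by these cinder tasks all share the same per-service container definition shape: container_name, image, volumes, dimensions and a healthcheck block, plus an extra haproxy section for cinder-api. Below is a minimal Python sketch of that shape, with values copied from the items above; the services dict and the printing loop are illustrative only and are not part of the playbooks. The healthcheck "test" strings are simply what the container engine runs via CMD-SHELL: healthcheck_curl against the node's internal API bind address on 8776 for cinder-api, and healthcheck_port <process> 5672 (5672 being the standard AMQP/RabbitMQ port) for the services that only speak RPC.

# Illustrative sketch only (not taken from the playbooks): two of the per-service
# container definitions printed in the loop items above, reduced to the fields
# relevant for the healthchecks. Values are copied from the log; the dict name
# and the print loop are made up for this example.
services = {
    "cinder-api": {
        "container_name": "cinder_api",
        "image": "registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
            # cinder-api exposes an HTTP API, so it is probed with healthcheck_curl
            # against the node's internal bind address on port 8776.
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
        },
    },
    "cinder-backup": {
        "container_name": "cinder_backup",
        "image": "registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
            # The RPC-only services (scheduler, volume, backup) have no HTTP port,
            # so they are probed with healthcheck_port against 5672, the AMQP port
            # the service is expected to stay connected to.
            "test": ["CMD-SHELL", "healthcheck_port cinder-backup 5672"],
        },
    },
}

for name, svc in services.items():
    # Print the shell command the container engine would run for each healthcheck.
    check = " ".join(svc["healthcheck"]["test"][1:])
    print(f"{name}: every {svc['healthcheck']['interval']}s run: {check}")

Running the sketch just prints the two probe commands; in the deployment itself these strings end up as the Docker healthcheck of each kolla container, which is why the loop output above repeats them for every node and service.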
2025-03-27 01:13:56.157402 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157445 | orchestrator | 2025-03-27 01:13:56.157454 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-03-27 01:13:56.157464 | orchestrator | Thursday 27 March 2025 01:11:39 +0000 (0:00:05.726) 0:01:27.637 ******** 2025-03-27 01:13:56.157473 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-03-27 01:13:56.157482 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:56.157492 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-03-27 01:13:56.157501 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:56.157517 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-03-27 01:13:56.157526 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:56.157548 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-03-27 01:13:56.157558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-03-27 01:13:56.157567 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-03-27 01:13:56.157576 | orchestrator | 2025-03-27 01:13:56.157586 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-03-27 01:13:56.157595 | orchestrator | Thursday 27 March 2025 01:11:45 +0000 (0:00:05.623) 0:01:33.260 ******** 2025-03-27 01:13:56.157604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.157614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.157651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157661 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.157671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.157690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.157716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.157726 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157736 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.157874 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.157914 | orchestrator | 2025-03-27 01:13:56.157926 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-03-27 01:13:56.157936 | orchestrator | Thursday 27 March 2025 01:12:01 +0000 (0:00:16.435) 0:01:49.696 ******** 2025-03-27 01:13:56.157945 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:56.157955 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:56.157964 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:56.157973 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:13:56.157982 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:13:56.157991 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:13:56.158001 | orchestrator | 2025-03-27 01:13:56.158010 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-03-27 01:13:56.158044 | orchestrator | Thursday 27 March 2025 01:12:05 +0000 (0:00:03.397) 0:01:53.093 ******** 2025-03-27 01:13:56.158055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.158065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-03-27 01:13:56.158122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158156 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:56.158165 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:56.158175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.158188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158224 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:56.158234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.158250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158478 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:56.158488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.158498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 
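
The container definitions iterated above all carry a 'healthcheck' whose test is either 'healthcheck_curl <url>' (cinder-api, probing the API on port 8776) or 'healthcheck_port <service> 5672' (scheduler, volume and backup, checking the connection towards RabbitMQ). The snippet below is a minimal, stdlib-only Python approximation of what those two checks verify. It is an illustration only: the real helpers are scripts shipped inside the kolla images, and healthcheck_port in particular verifies that the named service process holds a connection to the port rather than opening a new one.

    # Rough approximation of the two healthcheck styles seen in the definitions
    # above. Not the kolla helper scripts themselves -- an illustrative sketch only.
    import socket
    import sys
    import urllib.error
    import urllib.request

    def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
        """Pass if the endpoint answers HTTP at all (any status below 500)."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except urllib.error.HTTPError as exc:  # a 4xx still proves the API is up
            return exc.code < 500
        except OSError:
            return False

    def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
        """Pass if a TCP connection to host:port can be opened (simplified)."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # URL taken from the cinder-api definition for testbed-node-0 above;
        # using the same host for the RabbitMQ port check is an assumption.
        ok = (healthcheck_curl("http://192.168.16.10:8776")
              and healthcheck_port("192.168.16.10", 5672))
        sys.exit(0 if ok else 1)
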
01:13:56.158515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.158525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158600 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:56.158609 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:56.158619 | orchestrator | 2025-03-27 01:13:56.158628 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-03-27 01:13:56.158637 | orchestrator | Thursday 27 March 2025 01:12:07 +0000 (0:00:02.440) 0:01:55.534 ******** 2025-03-27 01:13:56.158646 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:56.158655 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:56.158664 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:56.158673 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:56.158682 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:56.158691 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:56.158700 | orchestrator | 2025-03-27 01:13:56.158710 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-03-27 01:13:56.158719 | orchestrator | Thursday 27 March 2025 01:12:09 +0000 (0:00:01.751) 0:01:57.285 ******** 2025-03-27 01:13:56.158732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.158742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.158766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.158776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-27 01:13:56.158791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.158801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158810 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.158825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-27 01:13:56.158834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.158859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.158869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.158902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.158940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-27 01:13:56.158959 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.158972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.158982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.158997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-03-27 01:13:56.159006 | orchestrator | 2025-03-27 01:13:56.159016 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-03-27 01:13:56.159025 | orchestrator | Thursday 27 March 2025 01:12:14 +0000 (0:00:04.828) 0:02:02.114 ******** 2025-03-27 01:13:56.159034 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:56.159043 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:13:56.159053 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:13:56.159062 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:13:56.159071 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:13:56.159080 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:13:56.159088 | orchestrator | 2025-03-27 01:13:56.159098 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-03-27 01:13:56.159107 | orchestrator | Thursday 27 March 2025 01:12:15 +0000 (0:00:00.755) 0:02:02.870 ******** 2025-03-27 01:13:56.159116 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:13:56.159125 | orchestrator | 2025-03-27 01:13:56.159134 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-03-27 01:13:56.159144 | orchestrator | Thursday 27 March 2025 01:12:18 +0000 (0:00:03.390) 0:02:06.261 ******** 2025-03-27 01:13:56.159153 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:13:56.159162 | orchestrator | 2025-03-27 01:13:56.159171 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-03-27 01:13:56.159181 | orchestrator | Thursday 27 March 2025 01:12:21 +0000 (0:00:02.898) 0:02:09.159 ******** 2025-03-27 01:13:56.159190 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:13:56.159199 | orchestrator | 2025-03-27 01:13:56.159208 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-03-27 01:13:56.159217 | orchestrator | Thursday 27 March 2025 01:12:42 +0000 (0:00:21.224) 0:02:30.384 ******** 2025-03-27 01:13:56.159226 | orchestrator | 2025-03-27 01:13:56.159235 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-03-27 01:13:56.159244 | orchestrator | Thursday 27 March 2025 01:12:42 +0000 (0:00:00.137) 0:02:30.521 ******** 2025-03-27 01:13:56.159254 | orchestrator | 2025-03-27 01:13:56.159263 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-03-27 01:13:56.159272 | orchestrator | Thursday 27 March 2025 01:12:43 +0000 (0:00:00.593) 0:02:31.114 ******** 2025-03-27 01:13:56.159281 | orchestrator | 2025-03-27 01:13:56.159290 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-03-27 01:13:56.159299 | orchestrator | Thursday 27 March 2025 01:12:43 +0000 (0:00:00.095) 
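
The tasks just above created the Cinder database ("Creating Cinder database"), created the database user and granted it access ("Creating Cinder database user and setting permissions"), and ran the one-shot bootstrap container ("Running Cinder bootstrap container", the 21 s step in the TASKS RECAP further down), after which the handlers restart the service containers. A heavily simplified sketch of what those three steps amount to follows. The database and user name 'cinder' are kolla-ansible defaults, the host and root password are placeholders, and kolla actually drives the SQL through its kolla_toolbox Ansible modules and runs the migration inside a bootstrap container built from the cinder-api image rather than calling these commands directly.

    # Simplified sketch only -- see the caveats above. Assumes the mysql client
    # and cinder-manage are available where this runs.
    import subprocess

    CREATE_DB_SQL = """
    CREATE DATABASE IF NOT EXISTS cinder;
    CREATE USER IF NOT EXISTS 'cinder'@'%' IDENTIFIED BY 'REPLACE_ME';
    GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%';
    """

    def bootstrap_cinder_db(db_host: str, root_password: str) -> None:
        # 1) Schema and user (kolla: mysql_db / mysql_user via kolla_toolbox).
        subprocess.run(
            ["mysql", "-h", db_host, "-uroot", f"-p{root_password}",
             "-e", CREATE_DB_SQL],
            check=True,
        )
        # 2) Schema migrations -- the actual work of the bootstrap container.
        subprocess.run(["cinder-manage", "db", "sync"], check=True)
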
0:02:31.210 ******** 2025-03-27 01:13:56.159308 | orchestrator | 2025-03-27 01:13:56.159317 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-03-27 01:13:56.159326 | orchestrator | Thursday 27 March 2025 01:12:43 +0000 (0:00:00.085) 0:02:31.296 ******** 2025-03-27 01:13:56.159335 | orchestrator | 2025-03-27 01:13:56.159345 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-03-27 01:13:56.159354 | orchestrator | Thursday 27 March 2025 01:12:43 +0000 (0:00:00.084) 0:02:31.380 ******** 2025-03-27 01:13:56.159363 | orchestrator | 2025-03-27 01:13:56.159372 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-03-27 01:13:56.159381 | orchestrator | Thursday 27 March 2025 01:12:43 +0000 (0:00:00.285) 0:02:31.666 ******** 2025-03-27 01:13:56.159394 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:13:56.159404 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:13:56.159413 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:13:56.159422 | orchestrator | 2025-03-27 01:13:56.159431 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-03-27 01:13:56.159441 | orchestrator | Thursday 27 March 2025 01:13:03 +0000 (0:00:19.176) 0:02:50.842 ******** 2025-03-27 01:13:56.159450 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:13:56.159459 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:13:56.159468 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:13:56.159477 | orchestrator | 2025-03-27 01:13:56.159486 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-03-27 01:13:56.159499 | orchestrator | Thursday 27 March 2025 01:13:14 +0000 (0:00:11.681) 0:03:02.524 ******** 2025-03-27 01:13:56.159898 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:13:56.159917 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:13:56.159927 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:13:56.159936 | orchestrator | 2025-03-27 01:13:56.159946 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-03-27 01:13:56.159955 | orchestrator | Thursday 27 March 2025 01:13:40 +0000 (0:00:26.051) 0:03:28.575 ******** 2025-03-27 01:13:56.159964 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:13:56.159973 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:13:56.159982 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:13:56.159991 | orchestrator | 2025-03-27 01:13:56.160001 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-03-27 01:13:56.160010 | orchestrator | Thursday 27 March 2025 01:13:54 +0000 (0:00:13.544) 0:03:42.120 ******** 2025-03-27 01:13:56.160019 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:13:56.160029 | orchestrator | 2025-03-27 01:13:56.160038 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:13:56.160047 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-03-27 01:13:56.160057 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-03-27 01:13:56.160066 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-03-27 01:13:56.160076 | orchestrator | 
testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-27 01:13:56.160085 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-27 01:13:56.160094 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-27 01:13:56.160103 | orchestrator | 2025-03-27 01:13:56.160113 | orchestrator | 2025-03-27 01:13:56.160122 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:13:56.160131 | orchestrator | Thursday 27 March 2025 01:13:55 +0000 (0:00:00.720) 0:03:42.840 ******** 2025-03-27 01:13:56.160140 | orchestrator | =============================================================================== 2025-03-27 01:13:56.160149 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.05s 2025-03-27 01:13:56.160158 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.22s 2025-03-27 01:13:56.160167 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 19.18s 2025-03-27 01:13:56.160176 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 16.44s 2025-03-27 01:13:56.160185 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 13.54s 2025-03-27 01:13:56.160202 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.68s 2025-03-27 01:13:56.160211 | orchestrator | service-ks-register : cinder | Granting user roles --------------------- 11.10s 2025-03-27 01:13:56.160220 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.26s 2025-03-27 01:13:56.160229 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 6.53s 2025-03-27 01:13:56.160238 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.73s 2025-03-27 01:13:56.160248 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 5.62s 2025-03-27 01:13:56.160257 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 5.30s 2025-03-27 01:13:56.160266 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.83s 2025-03-27 01:13:56.160275 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.66s 2025-03-27 01:13:56.160283 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 4.64s 2025-03-27 01:13:56.160293 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.58s 2025-03-27 01:13:56.160307 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.39s 2025-03-27 01:13:56.160316 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.36s 2025-03-27 01:13:56.160326 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.80s 2025-03-27 01:13:56.160335 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.71s 2025-03-27 01:13:56.160344 | orchestrator | 2025-03-27 01:13:56 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:56.160353 | orchestrator | 2025-03-27 01:13:56 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in 
state STARTED 2025-03-27 01:13:56.160366 | orchestrator | 2025-03-27 01:13:56 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:56.162104 | orchestrator | 2025-03-27 01:13:56 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:13:59.202625 | orchestrator | 2025-03-27 01:13:56 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:13:59.202873 | orchestrator | 2025-03-27 01:13:59 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:13:59.204135 | orchestrator | 2025-03-27 01:13:59 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:13:59.204168 | orchestrator | 2025-03-27 01:13:59 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:13:59.205420 | orchestrator | 2025-03-27 01:13:59 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:13:59.208264 | orchestrator | 2025-03-27 01:13:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:02.239052 | orchestrator | 2025-03-27 01:13:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:02.239196 | orchestrator | 2025-03-27 01:14:02 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:14:02.240736 | orchestrator | 2025-03-27 01:14:02 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:02.244071 | orchestrator | 2025-03-27 01:14:02 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:02.245363 | orchestrator | 2025-03-27 01:14:02 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:02.247166 | orchestrator | 2025-03-27 01:14:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:05.297913 | orchestrator | 2025-03-27 01:14:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:05.298114 | orchestrator | 2025-03-27 01:14:05 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:14:05.299251 | orchestrator | 2025-03-27 01:14:05 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:05.300233 | orchestrator | 2025-03-27 01:14:05 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:05.301408 | orchestrator | 2025-03-27 01:14:05 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:05.302677 | orchestrator | 2025-03-27 01:14:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:08.353582 | orchestrator | 2025-03-27 01:14:05 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:08.353683 | orchestrator | 2025-03-27 01:14:08 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state STARTED 2025-03-27 01:14:08.355325 | orchestrator | 2025-03-27 01:14:08 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:08.357901 | orchestrator | 2025-03-27 01:14:08 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:08.359384 | orchestrator | 2025-03-27 01:14:08 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:08.360996 | orchestrator | 2025-03-27 01:14:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:11.416189 | orchestrator | 2025-03-27 01:14:08 | INFO  | Wait 1 second(s) until the 
next check 2025-03-27 01:14:11.416322 | orchestrator | 2025-03-27 01:14:11 | INFO  | Task d12e1ab4-ebed-489e-87cf-4beed71ce915 is in state SUCCESS 2025-03-27 01:14:11.417133 | orchestrator | 2025-03-27 01:14:11.417174 | orchestrator | 2025-03-27 01:14:11.417251 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:14:11.417321 | orchestrator | 2025-03-27 01:14:11.417336 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:14:11.417350 | orchestrator | Thursday 27 March 2025 01:09:55 +0000 (0:00:00.339) 0:00:00.339 ******** 2025-03-27 01:14:11.417365 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:14:11.417380 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:14:11.417394 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:14:11.417408 | orchestrator | 2025-03-27 01:14:11.417422 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:14:11.417436 | orchestrator | Thursday 27 March 2025 01:09:55 +0000 (0:00:00.437) 0:00:00.776 ******** 2025-03-27 01:14:11.417450 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-03-27 01:14:11.417464 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-03-27 01:14:11.417478 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-03-27 01:14:11.417492 | orchestrator | 2025-03-27 01:14:11.417506 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-03-27 01:14:11.417519 | orchestrator | 2025-03-27 01:14:11.417802 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-03-27 01:14:11.417825 | orchestrator | Thursday 27 March 2025 01:09:55 +0000 (0:00:00.395) 0:00:01.172 ******** 2025-03-27 01:14:11.417839 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:14:11.417854 | orchestrator | 2025-03-27 01:14:11.417869 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-03-27 01:14:11.417883 | orchestrator | Thursday 27 March 2025 01:09:56 +0000 (0:00:00.774) 0:00:01.946 ******** 2025-03-27 01:14:11.417896 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-03-27 01:14:11.417910 | orchestrator | 2025-03-27 01:14:11.417924 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-03-27 01:14:11.417938 | orchestrator | Thursday 27 March 2025 01:10:00 +0000 (0:00:03.921) 0:00:05.868 ******** 2025-03-27 01:14:11.417973 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-03-27 01:14:11.417988 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-03-27 01:14:11.418002 | orchestrator | 2025-03-27 01:14:11.418065 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-03-27 01:14:11.418084 | orchestrator | Thursday 27 March 2025 01:10:08 +0000 (0:00:07.984) 0:00:13.853 ******** 2025-03-27 01:14:11.418098 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-27 01:14:11.418112 | orchestrator | 2025-03-27 01:14:11.418126 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-03-27 01:14:11.418140 | orchestrator | Thursday 27 March 
2025 01:10:12 +0000 (0:00:03.881) 0:00:17.734 ******** 2025-03-27 01:14:11.418154 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-27 01:14:11.418167 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-03-27 01:14:11.418181 | orchestrator | 2025-03-27 01:14:11.418195 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-03-27 01:14:11.418209 | orchestrator | Thursday 27 March 2025 01:10:16 +0000 (0:00:04.297) 0:00:22.032 ******** 2025-03-27 01:14:11.418222 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-27 01:14:11.418237 | orchestrator | 2025-03-27 01:14:11.418250 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-03-27 01:14:11.418264 | orchestrator | Thursday 27 March 2025 01:10:20 +0000 (0:00:03.917) 0:00:25.949 ******** 2025-03-27 01:14:11.418278 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-03-27 01:14:11.418292 | orchestrator | 2025-03-27 01:14:11.418305 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-03-27 01:14:11.418319 | orchestrator | Thursday 27 March 2025 01:10:26 +0000 (0:00:05.815) 0:00:31.765 ******** 2025-03-27 01:14:11.418351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.418372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 01:14:11.418398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.418427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 01:14:11.418452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.418479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 01:14:11.418503 | orchestrator | 2025-03-27 01:14:11.418519 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-03-27 01:14:11.418561 | orchestrator | Thursday 27 March 2025 01:10:35 +0000 (0:00:09.300) 0:00:41.065 ******** 2025-03-27 01:14:11.418578 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:14:11.418594 | orchestrator | 2025-03-27 01:14:11.418695 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-03-27 01:14:11.418712 | orchestrator | Thursday 27 March 2025 01:10:36 +0000 (0:00:00.715) 0:00:41.781 ******** 2025-03-27 01:14:11.418726 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:14:11.418740 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:14:11.418753 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:14:11.418767 | orchestrator | 2025-03-27 01:14:11.418781 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-03-27 01:14:11.418795 | orchestrator | Thursday 27 March 2025 01:10:48 +0000 (0:00:12.282) 0:00:54.063 ******** 2025-03-27 01:14:11.418808 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-27 01:14:11.418823 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-27 01:14:11.418850 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-27 01:14:11.418865 | orchestrator | 2025-03-27 01:14:11.418883 | orchestrator | TASK [glance : Copy over ceph Glance 
keyrings] ********************************* 2025-03-27 01:14:11.418897 | orchestrator | Thursday 27 March 2025 01:10:52 +0000 (0:00:03.650) 0:00:57.714 ******** 2025-03-27 01:14:11.418911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-27 01:14:11.418925 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-27 01:14:11.418939 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-03-27 01:14:11.418952 | orchestrator | 2025-03-27 01:14:11.418966 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-03-27 01:14:11.418980 | orchestrator | Thursday 27 March 2025 01:10:55 +0000 (0:00:02.877) 0:01:00.592 ******** 2025-03-27 01:14:11.418994 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:14:11.419013 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:14:11.419027 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:14:11.419041 | orchestrator | 2025-03-27 01:14:11.419055 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-03-27 01:14:11.419069 | orchestrator | Thursday 27 March 2025 01:10:56 +0000 (0:00:01.150) 0:01:01.742 ******** 2025-03-27 01:14:11.419083 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:14:11.419102 | orchestrator | 2025-03-27 01:14:11.419116 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-03-27 01:14:11.419129 | orchestrator | Thursday 27 March 2025 01:10:56 +0000 (0:00:00.224) 0:01:01.966 ******** 2025-03-27 01:14:11.419143 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:14:11.419166 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:14:11.419180 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:14:11.419194 | orchestrator | 2025-03-27 01:14:11.419208 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-03-27 01:14:11.419222 | orchestrator | Thursday 27 March 2025 01:10:56 +0000 (0:00:00.309) 0:01:02.276 ******** 2025-03-27 01:14:11.419236 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:14:11.419249 | orchestrator | 2025-03-27 01:14:11.419263 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-03-27 01:14:11.419277 | orchestrator | Thursday 27 March 2025 01:10:58 +0000 (0:00:01.282) 0:01:03.559 ******** 2025-03-27 01:14:11.419301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
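The external_ceph tasks above put a ceph.conf and a Ceph keyring into the glance-api config directory so the image service can use an RBD backend. As a hedged illustration of where that ends up, here is a minimal glance-api.conf backend section written with configparser; the pool and user names are assumptions, not values visible in this log:

```python
import configparser

# Assumed store/pool/user names for illustration; the real values are rendered
# by the deployment tooling, not taken from this log.
cfg = configparser.ConfigParser()
cfg["DEFAULT"] = {"enabled_backends": "rbd:rbd"}
cfg["glance_store"] = {"default_backend": "rbd"}
cfg["rbd"] = {
    "rbd_store_pool": "images",
    "rbd_store_user": "glance",
    "rbd_store_ceph_conf": "/etc/ceph/ceph.conf",
    "rbd_store_chunk_size": "8",
}

with open("glance-api.conf.sample", "w") as fh:
    cfg.write(fh)
```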
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.419319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.419354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.419372 | orchestrator | 2025-03-27 01:14:11.419387 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-03-27 01:14:11.419403 | orchestrator | Thursday 27 March 2025 01:11:04 +0000 (0:00:06.515) 0:01:10.075 ******** 2025-03-27 01:14:11.419419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-27 01:14:11.419437 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:14:11.419460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-27 01:14:11.419484 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:14:11.419501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-27 01:14:11.419518 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:14:11.419554 | orchestrator | 2025-03-27 01:14:11.419570 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-03-27 01:14:11.419587 | orchestrator | Thursday 27 March 2025 01:11:10 +0000 (0:00:05.567) 0:01:15.642 ******** 2025-03-27 01:14:11.419611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-27 01:14:11.419636 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:14:11.419652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-27 01:14:11.419669 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:14:11.419685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-27 01:14:11.419706 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:14:11.419720 | orchestrator | 2025-03-27 01:14:11.419734 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-03-27 01:14:11.419748 | orchestrator | Thursday 27 March 2025 01:11:16 +0000 (0:00:06.147) 0:01:21.790 ******** 2025-03-27 01:14:11.419761 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:14:11.419775 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:14:11.419789 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:14:11.419802 | orchestrator | 2025-03-27 01:14:11.419821 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-03-27 01:14:11.419835 | orchestrator | Thursday 27 March 2025 01:11:22 +0000 (0:00:06.371) 0:01:28.161 ******** 2025-03-27 01:14:11.419849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
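Every haproxy entry in these container definitions repeats the same custom_member_list, one server line per controller, optionally with TLS verification flags for the tls-proxy variant. A small sketch of how such lines can be rendered from a name-to-address map (the helper name is illustrative; the hosts and port are the ones shown in the log):

```python
def haproxy_member_lines(members, port=9292, tls_backend=False):
    """Render haproxy server lines like the custom_member_list entries above."""
    suffix = " ssl verify required ca-file ca-certificates.crt" if tls_backend else ""
    return [
        f"server {name} {address}:{port} check inter 2000 rise 2 fall 5{suffix}"
        for name, address in members.items()
    ]

controllers = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}
for line in haproxy_member_lines(controllers):
    print(line)
```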
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.419865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 01:14:11.419908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.419925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 01:14:11.419958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.419974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 01:14:11.419996 | orchestrator | 2025-03-27 01:14:11.420011 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-03-27 01:14:11.420025 | orchestrator | Thursday 27 March 2025 01:11:33 +0000 (0:00:10.584) 0:01:38.745 ******** 2025-03-27 01:14:11.420039 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:14:11.420053 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:14:11.420067 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:14:11.420080 | orchestrator | 2025-03-27 01:14:11.420094 | orchestrator | TASK [glance : Copying over glance-cache.conf for 
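The "Copying over config.json files for services" task ships the file that the kolla container entrypoint reads at startup to copy configuration into place and to know which command to run. A hedged sketch of what such a file can look like for glance-api; the command string and paths here are assumptions rather than values taken from this job:

```python
import json

# Illustrative structure only; kolla-ansible renders the real file from templates.
config = {
    "command": "glance-api",  # assumed start command
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/glance-api.conf",
            "dest": "/etc/glance/glance-api.conf",
            "owner": "glance",
            "perm": "0600",
        }
    ],
    "permissions": [
        {"path": "/var/log/kolla/glance", "owner": "glance:glance", "recurse": True}
    ],
}

print(json.dumps(config, indent=2))
```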
glance_api] ****************** 2025-03-27 01:14:11.420108 | orchestrator | Thursday 27 March 2025 01:11:56 +0000 (0:00:23.051) 0:02:01.797 ******** 2025-03-27 01:14:11.420121 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:14:11.420135 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:14:11.420149 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:14:11.420162 | orchestrator | 2025-03-27 01:14:11.420176 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-03-27 01:14:11.420190 | orchestrator | Thursday 27 March 2025 01:12:08 +0000 (0:00:11.822) 0:02:13.626 ******** 2025-03-27 01:14:11.420203 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:14:11.420217 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:14:11.420230 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:14:11.420244 | orchestrator | 2025-03-27 01:14:11.420258 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-03-27 01:14:11.420271 | orchestrator | Thursday 27 March 2025 01:12:19 +0000 (0:00:11.465) 0:02:25.092 ******** 2025-03-27 01:14:11.420285 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:14:11.420298 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:14:11.420312 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:14:11.420326 | orchestrator | 2025-03-27 01:14:11.420340 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-03-27 01:14:11.420354 | orchestrator | Thursday 27 March 2025 01:12:30 +0000 (0:00:10.719) 0:02:35.812 ******** 2025-03-27 01:14:11.420367 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:14:11.420386 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:14:11.420400 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:14:11.420414 | orchestrator | 2025-03-27 01:14:11.420428 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-03-27 01:14:11.420441 | orchestrator | Thursday 27 March 2025 01:12:38 +0000 (0:00:08.289) 0:02:44.101 ******** 2025-03-27 01:14:11.420455 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:14:11.420468 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:14:11.420482 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:14:11.420496 | orchestrator | 2025-03-27 01:14:11.420514 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-03-27 01:14:11.420529 | orchestrator | Thursday 27 March 2025 01:12:39 +0000 (0:00:00.505) 0:02:44.607 ******** 2025-03-27 01:14:11.420572 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-03-27 01:14:11.420586 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:14:11.420601 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-03-27 01:14:11.420614 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:14:11.420628 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-03-27 01:14:11.420642 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:14:11.420656 | orchestrator | 2025-03-27 01:14:11.420677 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-03-27 01:14:11.420691 | orchestrator | Thursday 27 March 2025 01:12:43 +0000 (0:00:04.664) 0:02:49.272 
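Each glance-api container definition above declares a healthcheck that runs healthcheck_curl against the node's API port every 30 seconds, with 3 retries and a 30-second timeout. A rough Python equivalent of that probe, assuming that any HTTP response counts as healthy (the exact semantics of healthcheck_curl may differ):

```python
import time
import urllib.error
import urllib.request

def probe(url, retries=3, interval=30, timeout=30):
    """Return True once the endpoint answers at all, False after exhausting retries."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True          # got an HTTP response: the API is up
        except urllib.error.HTTPError:
            return True              # an error status still proves the service answers
        except (urllib.error.URLError, OSError):
            if attempt < retries:
                time.sleep(interval)
    return False

# Internal API address taken from the healthcheck test string in the log.
print(probe("http://192.168.16.10:9292"))
```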
******** 2025-03-27 01:14:11.420705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.420728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 01:14:11.420744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.420771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-27 01:14:11.420788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-27 01:14:11.420810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-03-27 01:14:11.420825 | orchestrator |
2025-03-27 01:14:11.420839 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-03-27 01:14:11.420853 | orchestrator | Thursday 27 March 2025 01:12:51 +0000 (0:00:07.274) 0:02:56.546 ********
2025-03-27 01:14:11.420867 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:14:11.420880 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:14:11.421233 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:14:11.421258 | orchestrator |
2025-03-27 01:14:11.421284 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-03-27 01:14:11.421300 | orchestrator | Thursday 27 March 2025 01:12:51 +0000 (0:00:00.455) 0:02:57.001 ********
2025-03-27 01:14:11.421314 | orchestrator | changed: [testbed-node-0]
2025-03-27 01:14:11.421329 | orchestrator |
2025-03-27 01:14:11.421344 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-03-27 01:14:11.421359 | orchestrator | Thursday 27 March 2025 01:12:54 +0000 (0:00:02.330) 0:02:59.332 ********
2025-03-27 01:14:11.421373 | orchestrator | changed: [testbed-node-0]
2025-03-27 01:14:11.421400 | orchestrator |
2025-03-27 01:14:11.421415 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-03-27 01:14:11.421428 | orchestrator | Thursday 27 March 2025 01:12:56 +0000 (0:00:02.614) 0:03:01.946 ********
2025-03-27 01:14:11.421442 | orchestrator | changed: [testbed-node-0]
2025-03-27 01:14:11.421456 | orchestrator |
2025-03-27 01:14:11.421470 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-03-27 01:14:11.421483 | orchestrator | Thursday 27 March 2025 01:12:59 +0000 (0:00:02.527) 0:03:04.474 ********
2025-03-27 01:14:11.421497 | orchestrator | changed: [testbed-node-0]
2025-03-27 01:14:11.421511 | orchestrator |
2025-03-27 01:14:11.421524 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-03-27 01:14:11.421561 | orchestrator | Thursday 27 March 2025 01:13:28 +0000 (0:00:29.211) 0:03:33.685 ********
2025-03-27 01:14:11.421575 | orchestrator | changed: [testbed-node-0]
2025-03-27 01:14:11.421590 | orchestrator |
2025-03-27 01:14:11.421603 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-03-27 01:14:11.421617 | orchestrator | Thursday 27 March 2025 01:13:30 +0000 (0:00:02.443) 0:03:36.129 ********
2025-03-27 01:14:11.421631 | orchestrator |
2025-03-27 01:14:11.421645 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-03-27 01:14:11.421658 | orchestrator | Thursday 27 March 2025 01:13:30 +0000 (0:00:00.063) 0:03:36.192 ********
2025-03-27 01:14:11.421672 | orchestrator |
2025-03-27 01:14:11.421686 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-03-27 01:14:11.421699 | orchestrator | Thursday 27 March 2025 01:13:30 +0000 (0:00:00.063) 0:03:36.256 ********
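The Glance bootstrap sequence above (create the database, create the database user, enable log_bin_trust_function_creators, run the one-shot bootstrap container, disable the flag again) is the usual kolla-ansible pattern: the flag is commonly needed so the schema migration can create triggers and stored functions while binary logging is enabled on the MariaDB/Galera cluster. A minimal sketch of that order of operations, assuming Docker and a reachable MariaDB container on the node; credentials and container names are placeholders, and the real role passes far more configuration via its own modules:

```python
# Illustrative sketch only; kolla-ansible performs these steps through its own
# modules, not through this script. Credentials and names are placeholders.
import subprocess

MYSQL = ["docker", "exec", "mariadb", "mysql", "-uroot", "-psecret", "-e"]

def sql(statement: str) -> None:
    subprocess.run(MYSQL + [statement], check=True)

# 1. database and user for glance
sql("CREATE DATABASE IF NOT EXISTS glance")
sql("GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'secret'")

# 2. allow triggers/stored functions while the binlog is on (needed by db sync)
sql("SET GLOBAL log_bin_trust_function_creators = 1")

try:
    # 3. one-shot bootstrap container running the schema migration
    #    (image name taken from the log; the real task also mounts config files)
    subprocess.run(
        ["docker", "run", "--rm",
         "-e", "KOLLA_BOOTSTRAP=1",
         "registry.osism.tech/kolla/release/glance-api:28.1.1.20241206"],
        check=True,
    )
finally:
    # 4. switch the flag off again, mirroring the "Disable ..." task above
    sql("SET GLOBAL log_bin_trust_function_creators = 0")
```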
2025-03-27 01:14:11.421713 | orchestrator |
2025-03-27 01:14:11.421749 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-03-27 01:14:11.421764 | orchestrator | Thursday 27 March 2025 01:13:31 +0000 (0:00:00.221) 0:03:36.477 ********
2025-03-27 01:14:11.421778 | orchestrator | changed: [testbed-node-0]
2025-03-27 01:14:11.421792 | orchestrator | changed: [testbed-node-1]
2025-03-27 01:14:11.421806 | orchestrator | changed: [testbed-node-2]
2025-03-27 01:14:11.421819 | orchestrator |
2025-03-27 01:14:11.421833 | orchestrator | PLAY RECAP *********************************************************************
2025-03-27 01:14:11.421850 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-03-27 01:14:11.421868 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-03-27 01:14:11.421884 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-03-27 01:14:11.421899 | orchestrator |
2025-03-27 01:14:11.421914 | orchestrator |
2025-03-27 01:14:11.421930 | orchestrator | TASKS RECAP ********************************************************************
2025-03-27 01:14:11.421945 | orchestrator | Thursday 27 March 2025 01:14:08 +0000 (0:00:37.762) 0:04:14.240 ********
2025-03-27 01:14:11.421960 | orchestrator | ===============================================================================
2025-03-27 01:14:11.421976 | orchestrator | glance : Restart glance-api container ---------------------------------- 37.76s
2025-03-27 01:14:11.421992 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.21s
2025-03-27 01:14:11.422008 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 23.06s
2025-03-27 01:14:11.422075 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 12.28s
2025-03-27 01:14:11.422092 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 11.82s
2025-03-27 01:14:11.422107 | orchestrator | glance : Copying over glance-swift.conf for glance_api ----------------- 11.47s
2025-03-27 01:14:11.422123 | orchestrator | glance : Copying over glance-image-import.conf ------------------------- 10.72s
2025-03-27 01:14:11.422139 | orchestrator | glance : Copying over config.json files for services ------------------- 10.58s
2025-03-27 01:14:11.422164 | orchestrator | glance : Ensuring config directories exist ------------------------------ 9.30s
2025-03-27 01:14:11.422180 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 8.29s
2025-03-27 01:14:11.422195 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.98s
2025-03-27 01:14:11.422209 | orchestrator | glance : Check glance containers ---------------------------------------- 7.27s
2025-03-27 01:14:11.422222 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.52s
2025-03-27 01:14:11.422236 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 6.37s
2025-03-27 01:14:11.422250 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 6.15s
2025-03-27 01:14:11.422263 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 5.82s
2025-03-27 01:14:11.422277
| orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.57s 2025-03-27 01:14:11.422291 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.66s 2025-03-27 01:14:11.422305 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.30s 2025-03-27 01:14:11.422325 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.92s 2025-03-27 01:14:11.422782 | orchestrator | 2025-03-27 01:14:11 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:11.422805 | orchestrator | 2025-03-27 01:14:11 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:11.422818 | orchestrator | 2025-03-27 01:14:11 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:11.422835 | orchestrator | 2025-03-27 01:14:11 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:11.424880 | orchestrator | 2025-03-27 01:14:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:14.482443 | orchestrator | 2025-03-27 01:14:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:14.482591 | orchestrator | 2025-03-27 01:14:14 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:14.486226 | orchestrator | 2025-03-27 01:14:14 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:14.488451 | orchestrator | 2025-03-27 01:14:14 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:14.490864 | orchestrator | 2025-03-27 01:14:14 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:14.492691 | orchestrator | 2025-03-27 01:14:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:17.543105 | orchestrator | 2025-03-27 01:14:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:17.543231 | orchestrator | 2025-03-27 01:14:17 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:17.543646 | orchestrator | 2025-03-27 01:14:17 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:17.544760 | orchestrator | 2025-03-27 01:14:17 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:17.545634 | orchestrator | 2025-03-27 01:14:17 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:17.546694 | orchestrator | 2025-03-27 01:14:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:20.606093 | orchestrator | 2025-03-27 01:14:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:20.606231 | orchestrator | 2025-03-27 01:14:20 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:20.606995 | orchestrator | 2025-03-27 01:14:20 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:20.608405 | orchestrator | 2025-03-27 01:14:20 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:20.609375 | orchestrator | 2025-03-27 01:14:20 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:20.610490 | orchestrator | 2025-03-27 01:14:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 
01:14:23.658856 | orchestrator | 2025-03-27 01:14:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:23.658984 | orchestrator | 2025-03-27 01:14:23 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:23.659558 | orchestrator | 2025-03-27 01:14:23 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:23.661226 | orchestrator | 2025-03-27 01:14:23 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:23.662636 | orchestrator | 2025-03-27 01:14:23 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:23.663964 | orchestrator | 2025-03-27 01:14:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:26.718400 | orchestrator | 2025-03-27 01:14:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:26.718519 | orchestrator | 2025-03-27 01:14:26 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:26.718931 | orchestrator | 2025-03-27 01:14:26 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:26.719777 | orchestrator | 2025-03-27 01:14:26 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:26.720627 | orchestrator | 2025-03-27 01:14:26 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:26.724245 | orchestrator | 2025-03-27 01:14:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:29.780013 | orchestrator | 2025-03-27 01:14:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:29.780153 | orchestrator | 2025-03-27 01:14:29 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:29.781590 | orchestrator | 2025-03-27 01:14:29 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:29.783480 | orchestrator | 2025-03-27 01:14:29 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:29.785815 | orchestrator | 2025-03-27 01:14:29 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:29.787979 | orchestrator | 2025-03-27 01:14:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:32.838211 | orchestrator | 2025-03-27 01:14:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:32.838383 | orchestrator | 2025-03-27 01:14:32 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:32.843747 | orchestrator | 2025-03-27 01:14:32 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:32.844424 | orchestrator | 2025-03-27 01:14:32 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:32.845735 | orchestrator | 2025-03-27 01:14:32 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:32.846631 | orchestrator | 2025-03-27 01:14:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:35.895021 | orchestrator | 2025-03-27 01:14:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:35.895151 | orchestrator | 2025-03-27 01:14:35 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:35.897704 | orchestrator | 2025-03-27 01:14:35 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 
01:14:35.900452 | orchestrator | 2025-03-27 01:14:35 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:35.902141 | orchestrator | 2025-03-27 01:14:35 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:35.903280 | orchestrator | 2025-03-27 01:14:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:38.961393 | orchestrator | 2025-03-27 01:14:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:38.961520 | orchestrator | 2025-03-27 01:14:38 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:38.964036 | orchestrator | 2025-03-27 01:14:38 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:38.964090 | orchestrator | 2025-03-27 01:14:38 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:38.970068 | orchestrator | 2025-03-27 01:14:38 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:38.973696 | orchestrator | 2025-03-27 01:14:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:42.037646 | orchestrator | 2025-03-27 01:14:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:42.037783 | orchestrator | 2025-03-27 01:14:42 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:42.039338 | orchestrator | 2025-03-27 01:14:42 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:42.040966 | orchestrator | 2025-03-27 01:14:42 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:42.042215 | orchestrator | 2025-03-27 01:14:42 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:42.043764 | orchestrator | 2025-03-27 01:14:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:45.093953 | orchestrator | 2025-03-27 01:14:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:45.094140 | orchestrator | 2025-03-27 01:14:45 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:45.095774 | orchestrator | 2025-03-27 01:14:45 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:45.097265 | orchestrator | 2025-03-27 01:14:45 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:45.098255 | orchestrator | 2025-03-27 01:14:45 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:45.099333 | orchestrator | 2025-03-27 01:14:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:45.099436 | orchestrator | 2025-03-27 01:14:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:48.151973 | orchestrator | 2025-03-27 01:14:48 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:48.153503 | orchestrator | 2025-03-27 01:14:48 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:48.155875 | orchestrator | 2025-03-27 01:14:48 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:48.157447 | orchestrator | 2025-03-27 01:14:48 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:48.158823 | orchestrator | 2025-03-27 01:14:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in 
state STARTED 2025-03-27 01:14:48.158926 | orchestrator | 2025-03-27 01:14:48 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:51.209953 | orchestrator | 2025-03-27 01:14:51 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:51.213118 | orchestrator | 2025-03-27 01:14:51 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:51.215897 | orchestrator | 2025-03-27 01:14:51 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:51.217875 | orchestrator | 2025-03-27 01:14:51 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:51.219460 | orchestrator | 2025-03-27 01:14:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:54.265977 | orchestrator | 2025-03-27 01:14:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:54.266167 | orchestrator | 2025-03-27 01:14:54 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:54.269577 | orchestrator | 2025-03-27 01:14:54 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:54.269736 | orchestrator | 2025-03-27 01:14:54 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:54.270473 | orchestrator | 2025-03-27 01:14:54 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:54.272739 | orchestrator | 2025-03-27 01:14:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:57.317748 | orchestrator | 2025-03-27 01:14:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:14:57.317889 | orchestrator | 2025-03-27 01:14:57 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:14:57.320097 | orchestrator | 2025-03-27 01:14:57 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:14:57.320141 | orchestrator | 2025-03-27 01:14:57 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:14:57.320737 | orchestrator | 2025-03-27 01:14:57 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:14:57.322818 | orchestrator | 2025-03-27 01:14:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:14:57.323664 | orchestrator | 2025-03-27 01:14:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:00.371344 | orchestrator | 2025-03-27 01:15:00 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:00.372176 | orchestrator | 2025-03-27 01:15:00 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:00.373503 | orchestrator | 2025-03-27 01:15:00 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state STARTED 2025-03-27 01:15:00.374557 | orchestrator | 2025-03-27 01:15:00 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:00.375516 | orchestrator | 2025-03-27 01:15:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:03.421074 | orchestrator | 2025-03-27 01:15:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:03.421213 | orchestrator | 2025-03-27 01:15:03 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:03.422147 | orchestrator | 2025-03-27 01:15:03 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in 
state STARTED
2025-03-27 01:15:03.422681 | orchestrator |
2025-03-27 01:15:03.422715 | orchestrator |
2025-03-27 01:15:03.422730 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-27 01:15:03.422744 | orchestrator |
2025-03-27 01:15:03.422758 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-27 01:15:03.422772 | orchestrator | Thursday 27 March 2025 01:13:59 +0000 (0:00:00.435) 0:00:00.435 ********
2025-03-27 01:15:03.422786 | orchestrator | ok: [testbed-node-0]
2025-03-27 01:15:03.422802 | orchestrator | ok: [testbed-node-1]
2025-03-27 01:15:03.422815 | orchestrator | ok: [testbed-node-2]
2025-03-27 01:15:03.422829 | orchestrator |
2025-03-27 01:15:03.422843 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-27 01:15:03.422857 | orchestrator | Thursday 27 March 2025 01:13:59 +0000 (0:00:00.495) 0:00:00.931 ********
2025-03-27 01:15:03.422871 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-03-27 01:15:03.422885 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-03-27 01:15:03.422899 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-03-27 01:15:03.422913 | orchestrator |
2025-03-27 01:15:03.422927 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-03-27 01:15:03.422940 | orchestrator |
2025-03-27 01:15:03.422953 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-03-27 01:15:03.422968 | orchestrator | Thursday 27 March 2025 01:14:00 +0000 (0:00:00.386) 0:00:01.317 ********
2025-03-27 01:15:03.422982 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-27 01:15:03.422997 | orchestrator |
2025-03-27 01:15:03.423011 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-03-27 01:15:03.423025 | orchestrator | Thursday 27 March 2025 01:14:01 +0000 (0:00:00.895) 0:00:02.212 ********
2025-03-27 01:15:03.423039 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-03-27 01:15:03.423052 | orchestrator |
2025-03-27 01:15:03.423066 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-03-27 01:15:03.423079 | orchestrator | Thursday 27 March 2025 01:14:04 +0000 (0:00:03.886) 0:00:06.099 ********
2025-03-27 01:15:03.423093 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-03-27 01:15:03.423107 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-03-27 01:15:03.423121 | orchestrator |
2025-03-27 01:15:03.423134 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-03-27 01:15:03.423148 | orchestrator | Thursday 27 March 2025 01:14:12 +0000 (0:00:07.262) 0:00:13.361 ********
2025-03-27 01:15:03.423162 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-03-27 01:15:03.423176 | orchestrator |
2025-03-27 01:15:03.423189 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-03-27 01:15:03.423218 | orchestrator | Thursday 27 March 2025 01:14:15 +0000 (0:00:03.717) 0:00:17.079 ********
2025-03-27 01:15:03.423233 | orchestrator | [WARNING]: Module did not set no_log for update_password
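The service-ks-register tasks around this point register Octavia in Keystone: a service entry of type load-balancer, internal and public endpoints in RegionOne, a service user, and an admin role assignment (the no_log warning is a known, benign message from the user-handling module about the update_password parameter). A rough openstacksdk equivalent of that registration pattern; the cloud name, URLs and password below are placeholders, not values taken from this job:

```python
# Sketch of what the service-ks-register role effectively does, via openstacksdk.
# Cloud name, URLs and the password are placeholders.
import openstack

conn = openstack.connect(cloud="testbed")  # resolved from clouds.yaml

service = conn.identity.create_service(name="octavia", type="load-balancer")

for interface, url in {
    "internal": "https://api-int.testbed.osism.xyz:9876",
    "public": "https://api.testbed.osism.xyz:9876",
}.items():
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",
    )

project = conn.identity.find_project("service")
user = conn.identity.create_user(
    name="octavia", password="secret", default_project_id=project.id
)
admin_role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin_role)
```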
2025-03-27 01:15:03.423247 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-03-27 01:15:03.423260 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-03-27 01:15:03.423274 | orchestrator |
2025-03-27 01:15:03.423288 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-03-27 01:15:03.423301 | orchestrator | Thursday 27 March 2025 01:14:25 +0000 (0:00:09.105) 0:00:26.184 ********
2025-03-27 01:15:03.423314 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-03-27 01:15:03.423328 | orchestrator |
2025-03-27 01:15:03.423341 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-03-27 01:15:03.423355 | orchestrator | Thursday 27 March 2025 01:14:28 +0000 (0:00:03.849) 0:00:30.034 ********
2025-03-27 01:15:03.423368 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-03-27 01:15:03.423395 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-03-27 01:15:03.423408 | orchestrator |
2025-03-27 01:15:03.423422 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-03-27 01:15:03.423436 | orchestrator | Thursday 27 March 2025 01:14:37 +0000 (0:00:09.113) 0:00:39.147 ********
2025-03-27 01:15:03.423449 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-03-27 01:15:03.423463 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-03-27 01:15:03.423476 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-03-27 01:15:03.423490 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-03-27 01:15:03.423503 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-03-27 01:15:03.423517 | orchestrator |
2025-03-27 01:15:03.423531 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-03-27 01:15:03.423565 | orchestrator | Thursday 27 March 2025 01:14:56 +0000 (0:00:18.385) 0:00:57.533 ********
2025-03-27 01:15:03.423580 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-27 01:15:03.423593 | orchestrator |
2025-03-27 01:15:03.423607 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-03-27 01:15:03.423621 | orchestrator | Thursday 27 March 2025 01:14:57 +0000 (0:00:00.882) 0:00:58.415 ********
2025-03-27 01:15:03.423635 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found
2025-03-27 01:15:03.423783 | orchestrator | fatal: [testbed-node-0]: FAILED!
=> {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1743038098.8417928-6713-155695525295721/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1743038098.8417928-6713-155695525295721/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1743038098.8417928-6713-155695525295721/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_nova_flavor_payload_5ngy7phw/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in \n File \"/tmp/ansible_os_nova_flavor_payload_5ngy7phw/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_5ngy7phw/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 415, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_5ngy7phw/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-03-27 01:15:03.423822 | orchestrator | 2025-03-27 01:15:03.423837 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:15:03.423851 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-03-27 01:15:03.423866 | 
orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:15:03.423880 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:15:03.423894 | orchestrator | 2025-03-27 01:15:03.423907 | orchestrator | 2025-03-27 01:15:03.423921 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:15:03.423934 | orchestrator | Thursday 27 March 2025 01:15:00 +0000 (0:00:03.650) 0:01:02.066 ******** 2025-03-27 01:15:03.423948 | orchestrator | =============================================================================== 2025-03-27 01:15:03.423962 | orchestrator | octavia : Adding octavia related roles --------------------------------- 18.39s 2025-03-27 01:15:03.423987 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 9.11s 2025-03-27 01:15:03.426609 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.11s 2025-03-27 01:15:03.426659 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.26s 2025-03-27 01:15:03.426673 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.89s 2025-03-27 01:15:03.426686 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.85s 2025-03-27 01:15:03.426698 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.72s 2025-03-27 01:15:03.426710 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.65s 2025-03-27 01:15:03.426723 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.90s 2025-03-27 01:15:03.426735 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.88s 2025-03-27 01:15:03.426747 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s 2025-03-27 01:15:03.426759 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2025-03-27 01:15:03.426772 | orchestrator | 2025-03-27 01:15:03 | INFO  | Task 55dc2fde-7777-4b7c-b776-896debd23044 is in state SUCCESS 2025-03-27 01:15:03.426785 | orchestrator | 2025-03-27 01:15:03 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:03.426806 | orchestrator | 2025-03-27 01:15:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:03.427580 | orchestrator | 2025-03-27 01:15:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:06.473613 | orchestrator | 2025-03-27 01:15:06 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:06.474444 | orchestrator | 2025-03-27 01:15:06 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:06.474497 | orchestrator | 2025-03-27 01:15:06 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:06.474732 | orchestrator | 2025-03-27 01:15:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:09.517673 | orchestrator | 2025-03-27 01:15:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:09.517822 | orchestrator | 2025-03-27 01:15:09 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:09.518779 | orchestrator | 2025-03-27 01:15:09 | 
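The amphora flavor task above fails before it can talk to Nova: keystoneauth raises EndpointNotFound because no internal endpoint for the compute service exists in RegionOne at that moment, most likely because the services that register it are still being deployed by one of the parallel tasks that remain in state STARTED. A small keystoneauth1 sketch that reproduces the catalog lookup the module performs; the auth URL and credentials are placeholders:

```python
# Reproduces the catalog lookup that failed above. Auth URL and credentials are
# placeholders; service_type/interface/region match the error message.
from keystoneauth1 import exceptions, loading, session

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="https://api-int.testbed.osism.xyz:5000/v3",  # placeholder
    username="admin",
    password="secret",
    project_name="admin",
    user_domain_name="Default",
    project_domain_name="Default",
)
sess = session.Session(auth=auth)

try:
    url = sess.get_endpoint(
        service_type="compute", interface="internal", region_name="RegionOne"
    )
    print("compute endpoint:", url)
except exceptions.EndpointNotFound as err:
    # Same condition as the traceback above: the nova endpoint simply is not
    # registered in the Keystone catalog yet.
    print("not registered yet:", err)
```

From a shell, roughly the same check is `openstack endpoint list --service compute --interface internal --region RegionOne`.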
INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:09.519912 | orchestrator | 2025-03-27 01:15:09 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:09.522318 | orchestrator | 2025-03-27 01:15:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:12.559621 | orchestrator | 2025-03-27 01:15:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:12.559753 | orchestrator | 2025-03-27 01:15:12 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:12.561164 | orchestrator | 2025-03-27 01:15:12 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:12.563480 | orchestrator | 2025-03-27 01:15:12 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:12.565219 | orchestrator | 2025-03-27 01:15:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:15.609437 | orchestrator | 2025-03-27 01:15:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:15.609615 | orchestrator | 2025-03-27 01:15:15 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:15.615228 | orchestrator | 2025-03-27 01:15:15 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:15.618167 | orchestrator | 2025-03-27 01:15:15 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:15.620775 | orchestrator | 2025-03-27 01:15:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:18.670179 | orchestrator | 2025-03-27 01:15:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:18.670309 | orchestrator | 2025-03-27 01:15:18 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:18.672833 | orchestrator | 2025-03-27 01:15:18 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:18.677362 | orchestrator | 2025-03-27 01:15:18 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:18.679855 | orchestrator | 2025-03-27 01:15:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:18.680484 | orchestrator | 2025-03-27 01:15:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:21.742523 | orchestrator | 2025-03-27 01:15:21 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:21.745887 | orchestrator | 2025-03-27 01:15:21 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:21.747369 | orchestrator | 2025-03-27 01:15:21 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:21.749452 | orchestrator | 2025-03-27 01:15:21 | INFO  | Task 2deb0085-cbdb-42ce-a23a-5edb55f4bfff is in state STARTED 2025-03-27 01:15:21.751573 | orchestrator | 2025-03-27 01:15:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:21.751694 | orchestrator | 2025-03-27 01:15:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:24.817640 | orchestrator | 2025-03-27 01:15:24 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:24.819399 | orchestrator | 2025-03-27 01:15:24 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:24.823806 | orchestrator | 2025-03-27 01:15:24 | 
INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:24.826177 | orchestrator | 2025-03-27 01:15:24 | INFO  | Task 2deb0085-cbdb-42ce-a23a-5edb55f4bfff is in state STARTED 2025-03-27 01:15:24.828198 | orchestrator | 2025-03-27 01:15:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:27.887859 | orchestrator | 2025-03-27 01:15:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:27.887995 | orchestrator | 2025-03-27 01:15:27 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:27.890093 | orchestrator | 2025-03-27 01:15:27 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:27.890134 | orchestrator | 2025-03-27 01:15:27 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:27.891325 | orchestrator | 2025-03-27 01:15:27 | INFO  | Task 2deb0085-cbdb-42ce-a23a-5edb55f4bfff is in state STARTED 2025-03-27 01:15:30.944722 | orchestrator | 2025-03-27 01:15:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:30.944843 | orchestrator | 2025-03-27 01:15:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:30.944878 | orchestrator | 2025-03-27 01:15:30 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:30.945564 | orchestrator | 2025-03-27 01:15:30 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:30.946712 | orchestrator | 2025-03-27 01:15:30 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:30.947019 | orchestrator | 2025-03-27 01:15:30 | INFO  | Task 2deb0085-cbdb-42ce-a23a-5edb55f4bfff is in state SUCCESS 2025-03-27 01:15:30.949645 | orchestrator | 2025-03-27 01:15:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:34.007075 | orchestrator | 2025-03-27 01:15:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:34.007220 | orchestrator | 2025-03-27 01:15:34 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:34.008916 | orchestrator | 2025-03-27 01:15:34 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:34.010129 | orchestrator | 2025-03-27 01:15:34 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:34.011926 | orchestrator | 2025-03-27 01:15:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:37.069944 | orchestrator | 2025-03-27 01:15:34 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:37.070222 | orchestrator | 2025-03-27 01:15:37 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:37.071349 | orchestrator | 2025-03-27 01:15:37 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:37.071384 | orchestrator | 2025-03-27 01:15:37 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:37.072063 | orchestrator | 2025-03-27 01:15:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:40.121908 | orchestrator | 2025-03-27 01:15:37 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:40.122107 | orchestrator | 2025-03-27 01:15:40 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:40.123465 | orchestrator | 2025-03-27 01:15:40 | 
INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:40.127271 | orchestrator | 2025-03-27 01:15:40 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:40.129374 | orchestrator | 2025-03-27 01:15:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:40.129596 | orchestrator | 2025-03-27 01:15:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:43.198773 | orchestrator | 2025-03-27 01:15:43 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:43.200653 | orchestrator | 2025-03-27 01:15:43 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:43.202945 | orchestrator | 2025-03-27 01:15:43 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:43.204934 | orchestrator | 2025-03-27 01:15:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:43.205193 | orchestrator | 2025-03-27 01:15:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:46.324430 | orchestrator | 2025-03-27 01:15:46 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:46.325182 | orchestrator | 2025-03-27 01:15:46 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:46.325225 | orchestrator | 2025-03-27 01:15:46 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:46.326189 | orchestrator | 2025-03-27 01:15:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:49.370383 | orchestrator | 2025-03-27 01:15:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:49.370518 | orchestrator | 2025-03-27 01:15:49 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:49.370860 | orchestrator | 2025-03-27 01:15:49 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:49.372876 | orchestrator | 2025-03-27 01:15:49 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:49.374177 | orchestrator | 2025-03-27 01:15:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:52.422865 | orchestrator | 2025-03-27 01:15:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:52.422999 | orchestrator | 2025-03-27 01:15:52 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:52.425225 | orchestrator | 2025-03-27 01:15:52 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:52.429008 | orchestrator | 2025-03-27 01:15:52 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:52.430949 | orchestrator | 2025-03-27 01:15:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:52.431230 | orchestrator | 2025-03-27 01:15:52 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:55.491314 | orchestrator | 2025-03-27 01:15:55 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:55.493472 | orchestrator | 2025-03-27 01:15:55 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:55.499051 | orchestrator | 2025-03-27 01:15:55 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:55.500788 | orchestrator | 2025-03-27 01:15:55 | 
INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:15:55.501020 | orchestrator | 2025-03-27 01:15:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:15:58.548223 | orchestrator | 2025-03-27 01:15:58 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:15:58.549750 | orchestrator | 2025-03-27 01:15:58 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:15:58.551696 | orchestrator | 2025-03-27 01:15:58 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:15:58.552976 | orchestrator | 2025-03-27 01:15:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:01.609850 | orchestrator | 2025-03-27 01:15:58 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:01.609981 | orchestrator | 2025-03-27 01:16:01 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:16:01.611257 | orchestrator | 2025-03-27 01:16:01 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:16:01.612813 | orchestrator | 2025-03-27 01:16:01 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:01.615252 | orchestrator | 2025-03-27 01:16:01 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:04.669053 | orchestrator | 2025-03-27 01:16:01 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:04.669812 | orchestrator | 2025-03-27 01:16:04 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:16:04.670446 | orchestrator | 2025-03-27 01:16:04 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:16:04.672242 | orchestrator | 2025-03-27 01:16:04 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:04.673469 | orchestrator | 2025-03-27 01:16:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:07.726807 | orchestrator | 2025-03-27 01:16:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:07.726891 | orchestrator | 2025-03-27 01:16:07 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:16:07.728239 | orchestrator | 2025-03-27 01:16:07 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state STARTED 2025-03-27 01:16:07.730155 | orchestrator | 2025-03-27 01:16:07 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:07.731902 | orchestrator | 2025-03-27 01:16:07 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:07.732143 | orchestrator | 2025-03-27 01:16:07 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:10.784887 | orchestrator | 2025-03-27 01:16:10 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:16:10.790068 | orchestrator | 2025-03-27 01:16:10.790097 | orchestrator | None 2025-03-27 01:16:10.790104 | orchestrator | 2025-03-27 01:16:10.790110 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:16:10.790116 | orchestrator | 2025-03-27 01:16:10.790121 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:16:10.790127 | orchestrator | Thursday 27 March 2025 01:14:12 +0000 (0:00:00.331) 0:00:00.331 ******** 2025-03-27 01:16:10.790132 | orchestrator | ok: [testbed-node-0] 
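The long runs of "Task <uuid> is in state STARTED" lines above come from the OSISM manager waiting on the background tasks that run each kolla-ansible play: it re-checks their state, prints "Wait 1 second(s) until the next check" between rounds, and stops tracking a task once it reports SUCCESS (the STARTED/SUCCESS names match Celery task states). A minimal sketch of that polling pattern; `get_task_state` is a hypothetical helper, not the real OSISM API:

```python
# Minimal sketch of the polling pattern visible in this log.
# get_task_state() is a hypothetical stand-in for the real task-state lookup.
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll until every task has left the STARTED state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):        # sorted() copies, safe to discard
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```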
2025-03-27 01:16:10.790138 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:16:10.790157 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:16:10.790162 | orchestrator | 2025-03-27 01:16:10.790168 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:16:10.790173 | orchestrator | Thursday 27 March 2025 01:14:12 +0000 (0:00:00.425) 0:00:00.756 ******** 2025-03-27 01:16:10.790178 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-03-27 01:16:10.790192 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-03-27 01:16:10.790198 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-03-27 01:16:10.790203 | orchestrator | 2025-03-27 01:16:10.790208 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-03-27 01:16:10.790213 | orchestrator | 2025-03-27 01:16:10.790217 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-03-27 01:16:10.790223 | orchestrator | Thursday 27 March 2025 01:14:13 +0000 (0:00:00.340) 0:00:01.096 ******** 2025-03-27 01:16:10.790228 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:16:10.790234 | orchestrator | 2025-03-27 01:16:10.790239 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-03-27 01:16:10.790244 | orchestrator | Thursday 27 March 2025 01:14:14 +0000 (0:00:00.750) 0:00:01.847 ******** 2025-03-27 01:16:10.790251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790273 | orchestrator | 2025-03-27 01:16:10.790279 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-03-27 01:16:10.790284 | orchestrator | Thursday 27 March 2025 01:14:15 +0000 (0:00:01.050) 0:00:02.898 ******** 2025-03-27 01:16:10.790289 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-03-27 01:16:10.790295 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-03-27 01:16:10.790301 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 01:16:10.790306 | orchestrator | 2025-03-27 01:16:10.790312 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-03-27 01:16:10.790321 | orchestrator | Thursday 27 March 2025 01:14:15 +0000 (0:00:00.512) 0:00:03.410 ******** 2025-03-27 01:16:10.790327 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:16:10.790332 | orchestrator | 2025-03-27 01:16:10.790337 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-03-27 01:16:10.790343 | orchestrator | Thursday 27 March 2025 01:14:16 +0000 (0:00:00.680) 0:00:04.091 ******** 2025-03-27 01:16:10.790354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790372 | orchestrator | 2025-03-27 01:16:10.790377 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-03-27 01:16:10.790382 | orchestrator | Thursday 27 March 2025 01:14:17 +0000 (0:00:01.630) 0:00:05.722 ******** 2025-03-27 01:16:10.790388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-27 01:16:10.790393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-27 01:16:10.790402 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:16:10.790412 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:16:10.790422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-27 01:16:10.790428 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:16:10.790433 | orchestrator | 2025-03-27 01:16:10.790438 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-03-27 01:16:10.790443 | orchestrator | Thursday 27 March 2025 01:14:18 +0000 (0:00:00.624) 0:00:06.347 ******** 2025-03-27 01:16:10.790449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-27 01:16:10.790454 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:16:10.790459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-27 01:16:10.790464 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:16:10.790470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-27 01:16:10.790475 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:16:10.790480 | orchestrator | 2025-03-27 01:16:10.790485 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-03-27 01:16:10.790490 | orchestrator | Thursday 27 March 2025 01:14:19 +0000 (0:00:00.746) 0:00:07.094 ******** 2025-03-27 01:16:10.790499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790518 | orchestrator | 2025-03-27 01:16:10.790524 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-03-27 01:16:10.790529 | orchestrator | Thursday 27 March 2025 01:14:20 +0000 (0:00:01.609) 0:00:08.703 ******** 2025-03-27 01:16:10.790534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.790582 | orchestrator | 2025-03-27 01:16:10.790587 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-03-27 01:16:10.790592 | orchestrator | Thursday 27 March 2025 01:14:22 +0000 (0:00:01.790) 0:00:10.493 ******** 2025-03-27 01:16:10.790597 | orchestrator 
| skipping: [testbed-node-0] 2025-03-27 01:16:10.790601 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:16:10.790606 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:16:10.790611 | orchestrator | 2025-03-27 01:16:10.790616 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-03-27 01:16:10.790620 | orchestrator | Thursday 27 March 2025 01:14:22 +0000 (0:00:00.307) 0:00:10.801 ******** 2025-03-27 01:16:10.790625 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-03-27 01:16:10.790630 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-03-27 01:16:10.790636 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-03-27 01:16:10.790640 | orchestrator | 2025-03-27 01:16:10.790645 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-03-27 01:16:10.790650 | orchestrator | Thursday 27 March 2025 01:14:24 +0000 (0:00:01.513) 0:00:12.314 ******** 2025-03-27 01:16:10.790654 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-03-27 01:16:10.790662 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-03-27 01:16:10.790668 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-03-27 01:16:10.790673 | orchestrator | 2025-03-27 01:16:10.790679 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-03-27 01:16:10.790684 | orchestrator | Thursday 27 March 2025 01:14:26 +0000 (0:00:01.564) 0:00:13.878 ******** 2025-03-27 01:16:10.790690 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 01:16:10.790695 | orchestrator | 2025-03-27 01:16:10.790700 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-03-27 01:16:10.790706 | orchestrator | Thursday 27 March 2025 01:14:26 +0000 (0:00:00.453) 0:00:14.332 ******** 2025-03-27 01:16:10.790711 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-03-27 01:16:10.790717 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-03-27 01:16:10.790722 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:16:10.790727 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:16:10.790733 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:16:10.790738 | orchestrator | 2025-03-27 01:16:10.790743 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-03-27 01:16:10.790751 | orchestrator | Thursday 27 March 2025 01:14:27 +0000 (0:00:01.032) 0:00:15.364 ******** 2025-03-27 01:16:10.790757 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:16:10.790762 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:16:10.790768 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:16:10.790773 | orchestrator | 2025-03-27 01:16:10.790779 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-03-27 01:16:10.790784 | orchestrator | Thursday 27 March 2025 01:14:28 +0000 (0:00:00.543) 0:00:15.908 ******** 2025-03-27 01:16:10.790790 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1329197, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6616426, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1329197, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6616426, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1329197, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6616426, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1329188, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6476417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1329188, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6476417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1329188, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6476417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1329178, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6466415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1329178, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6466415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1329178, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6466415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1329193, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6496418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1329193, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1743034535.6496418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1329193, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6496418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1329163, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6426413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1329163, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6426413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1329163, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6426413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1329181, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6466415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1329181, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6466415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1329181, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6466415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1329192, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6496418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1329192, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6496418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1329192, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6496418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791183 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1329160, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6416411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1329160, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6416411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1329160, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0
2025-03-27 01:16:10 | INFO  | Task 91f1930b-b147-4770-80fb-6dd9af6ee047 is in state SUCCESS
2025-03-27 01:16:10.791208 | orchestrator | , 'mtime': 1737057118.0, 'ctime': 1743034535.6416411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1329124, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6306405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1329124, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6306405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path':
'/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1329124, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6306405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1329167, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6436412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1329167, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6436412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1329167, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6436412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1329132, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6336408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 34113, 'inode': 1329132, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6336408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1329132, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6336408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1329189, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.6486416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1329189, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.6486416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1329189, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.6486416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1329171, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.6446414, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1329171, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.6446414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1329171, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.6446414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1329195, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.650642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1329195, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.650642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1329195, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.650642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1329159, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6406412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1329159, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6406412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1329159, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6406412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1329184, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6476417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1329184, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6476417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1329184, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6476417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1329131, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6326406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1329131, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6326406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1329131, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6326406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1329135, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6346407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1329135, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6346407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1329135, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6346407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1329176, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6456416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1329176, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6456416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1329176, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6456416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1329280, 'dev': 168, 
'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6886444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1329280, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6886444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1329273, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6786437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1329280, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6886444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1329273, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6786437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1329273, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6786437, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1329349, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7026453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1329349, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7026453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1329349, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7026453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1329214, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6626427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1329214, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6626427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1329214, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6626427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1329356, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7046454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1329356, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7046454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1329356, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7046454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1329314, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6896443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1329314, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6896443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1329318, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6906445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1329314, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6896443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1329318, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6906445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1329217, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6646428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
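The loop items above and below belong to the grafana role's "Copying over custom dashboards" task (36.96s in the TASKS RECAP further down): each item's key is a dashboard path relative to /operations/grafana/dashboards, and its value is the file's stat metadata (path, mode 0644, size, uid/gid, permission flags), applied once per node, which is why every dashboard shows up for testbed-node-0, -1 and -2. Purely as an illustration of that data shape, not of the playbook itself, a minimal Python sketch that rebuilds such a mapping from the same directory could look like this:

import stat
from pathlib import Path

# Illustrative only: reconstruct the {relative_path: metadata} mapping that the
# "Copying over custom dashboards" loop iterates over in the log above.
# The directory path is taken from the 'path' fields of the logged items.
DASHBOARD_ROOT = Path("/operations/grafana/dashboards")

def collect_dashboards(root: Path) -> dict[str, dict]:
    items = {}
    for path in root.rglob("*.json"):
        st = path.stat()
        items[str(path.relative_to(root))] = {
            "path": str(path),
            "mode": oct(stat.S_IMODE(st.st_mode)),  # e.g. '0o644' (logged as '0644')
            "size": st.st_size,
            "uid": st.st_uid,
            "gid": st.st_gid,
        }
    return items

if __name__ == "__main__":
    for key, value in sorted(collect_dashboards(DASHBOARD_ROOT).items()):
        print(f"{key}: {value['size']} bytes, mode {value['mode']}")
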
2025-03-27 01:16:10.791704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1329318, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6906445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1329217, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6646428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1329275, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6796436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1329217, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6646428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1329275, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6796436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1329359, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7066455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1329275, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6796436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1329359, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7066455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1329326, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.6926446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1329359, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7066455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1329326, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.6926446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1329223, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.668643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1329326, 'dev': 168, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743034535.6926446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1329223, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.668643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1329220, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6656427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 187864, 'inode': 1329223, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.668643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1329220, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6656427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1329237, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.670643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1329220, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6656427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1329237, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.670643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1329244, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6776435, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1329237, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.670643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1329244, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6776435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1329374, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7216465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1329244, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.6776435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1329374, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7216465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1329374, 'dev': 168, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743034535.7216465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-03-27 01:16:10.791884 | orchestrator | 2025-03-27 01:16:10.791889 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-03-27 01:16:10.791896 | orchestrator | Thursday 27 March 2025 01:15:05 +0000 (0:00:36.959) 0:00:52.867 ******** 2025-03-27 01:16:10.791902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.791907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.791913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-27 01:16:10.791918 | orchestrator | 2025-03-27 01:16:10.791924 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-03-27 01:16:10.791932 | orchestrator | Thursday 27 March 2025 01:15:06 +0000 (0:00:01.226) 0:00:54.094 ******** 2025-03-27 01:16:10.791937 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:16:10.791942 | 
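For reference, the item fed to "Check grafana containers" above bundles everything needed to run and front the service: the kolla image tag, the config and log volumes, and two haproxy front ends on port 3000 (internal plus an external one bound to api.testbed.osism.xyz). A small, purely illustrative Python sketch of that structure and of reading the enabled front ends out of it (values copied from the log, volumes abridged; the helper is not part of any real role):

# Shape of the item iterated by "Check grafana containers" above (abridged).
grafana_service = {
    "container_name": "grafana",
    "image": "registry.osism.tech/kolla/release/grafana:11.4.0.20241206",
    "volumes": ["/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro",
                "kolla_logs:/var/log/kolla/"],
    "haproxy": {
        "grafana_server": {"enabled": "yes", "external": False,
                           "port": "3000", "listen_port": "3000"},
        "grafana_server_external": {"enabled": True, "external": True,
                                    "external_fqdn": "api.testbed.osism.xyz",
                                    "port": "3000", "listen_port": "3000"},
    },
}

def enabled_frontends(service: dict) -> list[str]:
    # 'enabled' appears both as a bool and as the string "yes" in the log item.
    return [name for name, cfg in service.get("haproxy", {}).items()
            if cfg.get("enabled") in (True, "yes")]

print(enabled_frontends(grafana_service))  # ['grafana_server', 'grafana_server_external']
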
orchestrator | 2025-03-27 01:16:10.791947 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-03-27 01:16:10.791952 | orchestrator | Thursday 27 March 2025 01:15:09 +0000 (0:00:02.950) 0:00:57.044 ******** 2025-03-27 01:16:10.791957 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:16:10.791962 | orchestrator | 2025-03-27 01:16:10.791967 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-03-27 01:16:10.791972 | orchestrator | Thursday 27 March 2025 01:15:11 +0000 (0:00:02.629) 0:00:59.674 ******** 2025-03-27 01:16:10.791977 | orchestrator | 2025-03-27 01:16:10.791982 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-03-27 01:16:10.791987 | orchestrator | Thursday 27 March 2025 01:15:11 +0000 (0:00:00.062) 0:00:59.736 ******** 2025-03-27 01:16:10.791992 | orchestrator | 2025-03-27 01:16:10.791997 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-03-27 01:16:10.792002 | orchestrator | Thursday 27 March 2025 01:15:11 +0000 (0:00:00.063) 0:00:59.799 ******** 2025-03-27 01:16:10.792007 | orchestrator | 2025-03-27 01:16:10.792012 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-03-27 01:16:10.792017 | orchestrator | Thursday 27 March 2025 01:15:12 +0000 (0:00:00.212) 0:01:00.012 ******** 2025-03-27 01:16:10.792022 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:16:10.792027 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:16:10.792032 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:16:10.792037 | orchestrator | 2025-03-27 01:16:10.792042 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-03-27 01:16:10.792047 | orchestrator | Thursday 27 March 2025 01:15:14 +0000 (0:00:02.043) 0:01:02.056 ******** 2025-03-27 01:16:10.792052 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:16:10.792057 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:16:10.792062 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-03-27 01:16:10.792067 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
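The two FAILED - RETRYING lines above are the expected behaviour of the "Waiting for grafana to start on first node" handler: it probes the service with a bounded retry budget (the log counts down from 12) until the HTTP endpoint answers, which is where the 27.73s reported for this handler in the TASKS RECAP goes. A minimal Python sketch of that pattern, assuming port 3000 from the haproxy config above; the exact health URL and delay between attempts are not shown in the log:

import time
import urllib.request
import urllib.error

# Sketch of a bounded wait-for-HTTP-service loop, mirroring the retry messages
# in the log. Host, path and delay are assumptions for illustration only.
def wait_for_grafana(url: str = "http://testbed-node-0:3000/api/health",
                     retries: int = 12, delay: float = 10.0) -> bool:
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        print(f"FAILED - RETRYING ({retries - attempt} retries left)")
        time.sleep(delay)
    return False

Two failed probes followed by a successful one matches the ok: [testbed-node-0] result that follows.
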
2025-03-27 01:16:10.792073 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:16:10.792078 | orchestrator | 2025-03-27 01:16:10.792083 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-03-27 01:16:10.792088 | orchestrator | Thursday 27 March 2025 01:15:41 +0000 (0:00:27.734) 0:01:29.790 ******** 2025-03-27 01:16:10.792093 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:16:10.792098 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:16:10.792103 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:16:10.792108 | orchestrator | 2025-03-27 01:16:10.792116 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-03-27 01:16:10.793146 | orchestrator | Thursday 27 March 2025 01:16:00 +0000 (0:00:18.877) 0:01:48.668 ******** 2025-03-27 01:16:10.793155 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:16:10.793160 | orchestrator | 2025-03-27 01:16:10.793165 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-03-27 01:16:10.793170 | orchestrator | Thursday 27 March 2025 01:16:03 +0000 (0:00:02.608) 0:01:51.276 ******** 2025-03-27 01:16:10.793175 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:16:10.793179 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:16:10.793184 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:16:10.793189 | orchestrator | 2025-03-27 01:16:10.793194 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-03-27 01:16:10.793199 | orchestrator | Thursday 27 March 2025 01:16:03 +0000 (0:00:00.430) 0:01:51.706 ******** 2025-03-27 01:16:10.793204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-03-27 01:16:10.793216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-03-27 01:16:10.793221 | orchestrator | 2025-03-27 01:16:10.793226 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-03-27 01:16:10.793231 | orchestrator | Thursday 27 March 2025 01:16:06 +0000 (0:00:03.134) 0:01:54.841 ******** 2025-03-27 01:16:10.793235 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:16:10.793240 | orchestrator | 2025-03-27 01:16:10.793245 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:16:10.793250 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-27 01:16:10.793255 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-27 01:16:10.793260 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-03-27 01:16:10.793265 | orchestrator | 2025-03-27 01:16:10.793270 | orchestrator | 2025-03-27 01:16:10.793274 | orchestrator | TASKS RECAP 
******************************************************************** 2025-03-27 01:16:10.793279 | orchestrator | Thursday 27 March 2025 01:16:07 +0000 (0:00:00.411) 0:01:55.252 ******** 2025-03-27 01:16:10.793289 | orchestrator | =============================================================================== 2025-03-27 01:16:10.793294 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.96s 2025-03-27 01:16:10.793299 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.73s 2025-03-27 01:16:10.793304 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 18.88s 2025-03-27 01:16:10.793309 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 3.13s 2025-03-27 01:16:10.793313 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.95s 2025-03-27 01:16:10.793318 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.63s 2025-03-27 01:16:10.793323 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.61s 2025-03-27 01:16:10.793328 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.04s 2025-03-27 01:16:10.793332 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.79s 2025-03-27 01:16:10.793337 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.63s 2025-03-27 01:16:10.793342 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.61s 2025-03-27 01:16:10.793347 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.56s 2025-03-27 01:16:10.793351 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.51s 2025-03-27 01:16:10.793356 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.23s 2025-03-27 01:16:10.793361 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.05s 2025-03-27 01:16:10.793366 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.03s 2025-03-27 01:16:10.793370 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.75s 2025-03-27 01:16:10.793375 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.75s 2025-03-27 01:16:10.793380 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.68s 2025-03-27 01:16:10.793385 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.62s 2025-03-27 01:16:10.793393 | orchestrator | 2025-03-27 01:16:10 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:10.793399 | orchestrator | 2025-03-27 01:16:10 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:13.855889 | orchestrator | 2025-03-27 01:16:10 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:13.856025 | orchestrator | 2025-03-27 01:16:13 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:16:13.858958 | orchestrator | 2025-03-27 01:16:13 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:13.861805 | orchestrator | 2025-03-27 01:16:13 | INFO  | Task 
06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:16.920299 | orchestrator | 2025-03-27 01:16:13 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:16.920425 | orchestrator | 2025-03-27 01:16:16 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:16:16.922597 | orchestrator | 2025-03-27 01:16:16 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:16.925148 | orchestrator | 2025-03-27 01:16:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:19.971486 | orchestrator | 2025-03-27 01:16:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:19.971673 | orchestrator | 2025-03-27 01:16:19 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state STARTED 2025-03-27 01:16:19.971829 | orchestrator | 2025-03-27 01:16:19 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:19.973603 | orchestrator | 2025-03-27 01:16:19 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:23.024815 | orchestrator | 2025-03-27 01:16:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:23.024935 | orchestrator | 2025-03-27 01:16:23 | INFO  | Task c66324d1-0842-4788-b4cf-a060caac17c4 is in state SUCCESS 2025-03-27 01:16:23.026305 | orchestrator | 2025-03-27 01:16:23 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:23.031682 | orchestrator | 2025-03-27 01:16:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:23.031834 | orchestrator | 2025-03-27 01:16:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:26.098526 | orchestrator | 2025-03-27 01:16:26 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:26.100082 | orchestrator | 2025-03-27 01:16:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:29.153989 | orchestrator | 2025-03-27 01:16:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:29.154159 | orchestrator | 2025-03-27 01:16:29 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:29.155421 | orchestrator | 2025-03-27 01:16:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:32.206476 | orchestrator | 2025-03-27 01:16:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:32.206634 | orchestrator | 2025-03-27 01:16:32 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:35.246065 | orchestrator | 2025-03-27 01:16:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:35.246197 | orchestrator | 2025-03-27 01:16:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:35.246234 | orchestrator | 2025-03-27 01:16:35 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:35.248029 | orchestrator | 2025-03-27 01:16:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:38.292361 | orchestrator | 2025-03-27 01:16:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:38.292490 | orchestrator | 2025-03-27 01:16:38 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:41.333740 | orchestrator | 2025-03-27 01:16:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 
01:16:41.333860 | orchestrator | 2025-03-27 01:16:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:41.333894 | orchestrator | 2025-03-27 01:16:41 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:41.335921 | orchestrator | 2025-03-27 01:16:41 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:44.381292 | orchestrator | 2025-03-27 01:16:41 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:44.381438 | orchestrator | 2025-03-27 01:16:44 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:44.381646 | orchestrator | 2025-03-27 01:16:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:44.381683 | orchestrator | 2025-03-27 01:16:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:47.441676 | orchestrator | 2025-03-27 01:16:47 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:50.503091 | orchestrator | 2025-03-27 01:16:47 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:50.503252 | orchestrator | 2025-03-27 01:16:47 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:50.503291 | orchestrator | 2025-03-27 01:16:50 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:50.506269 | orchestrator | 2025-03-27 01:16:50 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:53.563103 | orchestrator | 2025-03-27 01:16:50 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:53.563235 | orchestrator | 2025-03-27 01:16:53 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:56.617284 | orchestrator | 2025-03-27 01:16:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:56.617405 | orchestrator | 2025-03-27 01:16:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:56.617442 | orchestrator | 2025-03-27 01:16:56 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:56.621017 | orchestrator | 2025-03-27 01:16:56 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:16:59.682080 | orchestrator | 2025-03-27 01:16:56 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:16:59.682221 | orchestrator | 2025-03-27 01:16:59 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:16:59.685873 | orchestrator | 2025-03-27 01:16:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:02.745634 | orchestrator | 2025-03-27 01:16:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:02.745761 | orchestrator | 2025-03-27 01:17:02 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:02.748703 | orchestrator | 2025-03-27 01:17:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:05.796896 | orchestrator | 2025-03-27 01:17:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:05.797051 | orchestrator | 2025-03-27 01:17:05 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:05.798650 | orchestrator | 2025-03-27 01:17:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:08.854344 | orchestrator | 2025-03-27 01:17:05 | INFO  | Wait 1 second(s) until the next 
check 2025-03-27 01:17:08.854475 | orchestrator | 2025-03-27 01:17:08 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:08.856650 | orchestrator | 2025-03-27 01:17:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:11.905227 | orchestrator | 2025-03-27 01:17:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:11.905370 | orchestrator | 2025-03-27 01:17:11 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:11.905819 | orchestrator | 2025-03-27 01:17:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:14.961372 | orchestrator | 2025-03-27 01:17:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:14.961472 | orchestrator | 2025-03-27 01:17:14 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:14.965320 | orchestrator | 2025-03-27 01:17:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:18.012479 | orchestrator | 2025-03-27 01:17:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:18.012646 | orchestrator | 2025-03-27 01:17:18 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:18.013746 | orchestrator | 2025-03-27 01:17:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:21.059063 | orchestrator | 2025-03-27 01:17:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:21.059189 | orchestrator | 2025-03-27 01:17:21 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:21.059468 | orchestrator | 2025-03-27 01:17:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:24.108393 | orchestrator | 2025-03-27 01:17:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:24.108531 | orchestrator | 2025-03-27 01:17:24 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:24.109711 | orchestrator | 2025-03-27 01:17:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:27.158218 | orchestrator | 2025-03-27 01:17:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:27.158369 | orchestrator | 2025-03-27 01:17:27 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:27.159475 | orchestrator | 2025-03-27 01:17:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:27.162925 | orchestrator | 2025-03-27 01:17:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:30.216400 | orchestrator | 2025-03-27 01:17:30 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:30.217971 | orchestrator | 2025-03-27 01:17:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:30.218385 | orchestrator | 2025-03-27 01:17:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:33.263598 | orchestrator | 2025-03-27 01:17:33 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:33.265733 | orchestrator | 2025-03-27 01:17:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:36.319361 | orchestrator | 2025-03-27 01:17:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:36.319500 | orchestrator | 2025-03-27 01:17:36 | INFO  | Task 
550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:36.320654 | orchestrator | 2025-03-27 01:17:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:39.365679 | orchestrator | 2025-03-27 01:17:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:39.365821 | orchestrator | 2025-03-27 01:17:39 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:42.406311 | orchestrator | 2025-03-27 01:17:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:42.406428 | orchestrator | 2025-03-27 01:17:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:42.406463 | orchestrator | 2025-03-27 01:17:42 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:42.409657 | orchestrator | 2025-03-27 01:17:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:45.453667 | orchestrator | 2025-03-27 01:17:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:45.453807 | orchestrator | 2025-03-27 01:17:45 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:45.457161 | orchestrator | 2025-03-27 01:17:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:45.457319 | orchestrator | 2025-03-27 01:17:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:48.510834 | orchestrator | 2025-03-27 01:17:48 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:48.512624 | orchestrator | 2025-03-27 01:17:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:48.513408 | orchestrator | 2025-03-27 01:17:48 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:51.568997 | orchestrator | 2025-03-27 01:17:51 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:51.570312 | orchestrator | 2025-03-27 01:17:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:54.620025 | orchestrator | 2025-03-27 01:17:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:54.620157 | orchestrator | 2025-03-27 01:17:54 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:54.621028 | orchestrator | 2025-03-27 01:17:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:57.697913 | orchestrator | 2025-03-27 01:17:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:17:57.698104 | orchestrator | 2025-03-27 01:17:57 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:17:57.698609 | orchestrator | 2025-03-27 01:17:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:17:57.698898 | orchestrator | 2025-03-27 01:17:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:00.750321 | orchestrator | 2025-03-27 01:18:00 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:03.799731 | orchestrator | 2025-03-27 01:18:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:03.799853 | orchestrator | 2025-03-27 01:18:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:03.799893 | orchestrator | 2025-03-27 01:18:03 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:03.801386 | orchestrator 
| 2025-03-27 01:18:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:06.870003 | orchestrator | 2025-03-27 01:18:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:06.870157 | orchestrator | 2025-03-27 01:18:06 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:06.871673 | orchestrator | 2025-03-27 01:18:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:06.871697 | orchestrator | 2025-03-27 01:18:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:09.916414 | orchestrator | 2025-03-27 01:18:09 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:09.918486 | orchestrator | 2025-03-27 01:18:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:12.960682 | orchestrator | 2025-03-27 01:18:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:12.960818 | orchestrator | 2025-03-27 01:18:12 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:12.962151 | orchestrator | 2025-03-27 01:18:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:16.024901 | orchestrator | 2025-03-27 01:18:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:16.025029 | orchestrator | 2025-03-27 01:18:16 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:16.028401 | orchestrator | 2025-03-27 01:18:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:19.081858 | orchestrator | 2025-03-27 01:18:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:19.081999 | orchestrator | 2025-03-27 01:18:19 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:19.082691 | orchestrator | 2025-03-27 01:18:19 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:22.131912 | orchestrator | 2025-03-27 01:18:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:22.132077 | orchestrator | 2025-03-27 01:18:22 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:22.133071 | orchestrator | 2025-03-27 01:18:22 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:25.187707 | orchestrator | 2025-03-27 01:18:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:25.187842 | orchestrator | 2025-03-27 01:18:25 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:25.189602 | orchestrator | 2025-03-27 01:18:25 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:28.256890 | orchestrator | 2025-03-27 01:18:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:28.257031 | orchestrator | 2025-03-27 01:18:28 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:31.317507 | orchestrator | 2025-03-27 01:18:28 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:31.317667 | orchestrator | 2025-03-27 01:18:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:31.317703 | orchestrator | 2025-03-27 01:18:31 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:31.318758 | orchestrator | 2025-03-27 01:18:31 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 
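This long tail of "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines is the deploy wrapper polling the OSISM task queue: it re-reads each task's state every round until it flips to SUCCESS (as task c66324d1-0842-4788-b4cf-a060caac17c4 did earlier at 01:16:23) and sleeps in between. The actual client code is not part of this log; a generic sketch of the same loop, with get_state() standing in for the real state lookup:

import time
from typing import Callable, Iterable

# Generic sketch of the polling visible above: repeatedly query each task's
# state and stop once everything reports SUCCESS.
def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO | Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"INFO | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

A production version would also bail out on a failure state instead of waiting indefinitely; this log only ever shows STARTED and SUCCESS.
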
2025-03-27 01:18:31.319156 | orchestrator | 2025-03-27 01:18:31 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:34.372069 | orchestrator | 2025-03-27 01:18:34 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:34.373518 | orchestrator | 2025-03-27 01:18:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:37.426485 | orchestrator | 2025-03-27 01:18:34 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:37.426657 | orchestrator | 2025-03-27 01:18:37 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:40.472701 | orchestrator | 2025-03-27 01:18:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:40.472825 | orchestrator | 2025-03-27 01:18:37 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:40.472864 | orchestrator | 2025-03-27 01:18:40 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:40.474192 | orchestrator | 2025-03-27 01:18:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:43.521015 | orchestrator | 2025-03-27 01:18:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:43.521157 | orchestrator | 2025-03-27 01:18:43 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:43.522542 | orchestrator | 2025-03-27 01:18:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:46.569451 | orchestrator | 2025-03-27 01:18:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:46.569632 | orchestrator | 2025-03-27 01:18:46 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:46.570667 | orchestrator | 2025-03-27 01:18:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:49.634227 | orchestrator | 2025-03-27 01:18:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:49.634366 | orchestrator | 2025-03-27 01:18:49 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:52.681249 | orchestrator | 2025-03-27 01:18:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:52.681363 | orchestrator | 2025-03-27 01:18:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:52.681399 | orchestrator | 2025-03-27 01:18:52 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:52.683216 | orchestrator | 2025-03-27 01:18:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:52.683493 | orchestrator | 2025-03-27 01:18:52 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:55.746288 | orchestrator | 2025-03-27 01:18:55 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:55.748344 | orchestrator | 2025-03-27 01:18:55 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:18:58.796375 | orchestrator | 2025-03-27 01:18:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:18:58.796512 | orchestrator | 2025-03-27 01:18:58 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:18:58.797435 | orchestrator | 2025-03-27 01:18:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:01.852021 | orchestrator | 2025-03-27 01:18:58 | INFO  | Wait 1 second(s) until 
the next check 2025-03-27 01:19:01.852161 | orchestrator | 2025-03-27 01:19:01 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:01.853340 | orchestrator | 2025-03-27 01:19:01 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:01.853652 | orchestrator | 2025-03-27 01:19:01 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:04.906416 | orchestrator | 2025-03-27 01:19:04 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:04.908527 | orchestrator | 2025-03-27 01:19:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:07.956395 | orchestrator | 2025-03-27 01:19:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:07.956540 | orchestrator | 2025-03-27 01:19:07 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:07.957493 | orchestrator | 2025-03-27 01:19:07 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:11.017537 | orchestrator | 2025-03-27 01:19:07 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:11.017678 | orchestrator | 2025-03-27 01:19:11 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:11.020180 | orchestrator | 2025-03-27 01:19:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:11.021015 | orchestrator | 2025-03-27 01:19:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:14.061932 | orchestrator | 2025-03-27 01:19:14 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:14.062385 | orchestrator | 2025-03-27 01:19:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:17.121112 | orchestrator | 2025-03-27 01:19:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:17.121237 | orchestrator | 2025-03-27 01:19:17 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:17.124584 | orchestrator | 2025-03-27 01:19:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:20.179121 | orchestrator | 2025-03-27 01:19:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:20.179266 | orchestrator | 2025-03-27 01:19:20 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:20.181315 | orchestrator | 2025-03-27 01:19:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:23.230836 | orchestrator | 2025-03-27 01:19:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:23.230965 | orchestrator | 2025-03-27 01:19:23 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:23.232133 | orchestrator | 2025-03-27 01:19:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:26.284377 | orchestrator | 2025-03-27 01:19:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:26.284510 | orchestrator | 2025-03-27 01:19:26 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:26.286834 | orchestrator | 2025-03-27 01:19:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:29.340346 | orchestrator | 2025-03-27 01:19:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:29.340477 | orchestrator | 2025-03-27 01:19:29 | INFO  | Task 
550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:32.382782 | orchestrator | 2025-03-27 01:19:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:32.382911 | orchestrator | 2025-03-27 01:19:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:32.382948 | orchestrator | 2025-03-27 01:19:32 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:32.384207 | orchestrator | 2025-03-27 01:19:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:35.433774 | orchestrator | 2025-03-27 01:19:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:35.433916 | orchestrator | 2025-03-27 01:19:35 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:35.434992 | orchestrator | 2025-03-27 01:19:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:38.482798 | orchestrator | 2025-03-27 01:19:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:38.482938 | orchestrator | 2025-03-27 01:19:38 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:38.484962 | orchestrator | 2025-03-27 01:19:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:41.534190 | orchestrator | 2025-03-27 01:19:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:41.534313 | orchestrator | 2025-03-27 01:19:41 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:41.535813 | orchestrator | 2025-03-27 01:19:41 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:41.536409 | orchestrator | 2025-03-27 01:19:41 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:44.579595 | orchestrator | 2025-03-27 01:19:44 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:44.581473 | orchestrator | 2025-03-27 01:19:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:44.581512 | orchestrator | 2025-03-27 01:19:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:47.626804 | orchestrator | 2025-03-27 01:19:47 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:47.628238 | orchestrator | 2025-03-27 01:19:47 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:50.690155 | orchestrator | 2025-03-27 01:19:47 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:50.690299 | orchestrator | 2025-03-27 01:19:50 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:50.692401 | orchestrator | 2025-03-27 01:19:50 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:53.746655 | orchestrator | 2025-03-27 01:19:50 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:53.746787 | orchestrator | 2025-03-27 01:19:53 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:53.749320 | orchestrator | 2025-03-27 01:19:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:56.798485 | orchestrator | 2025-03-27 01:19:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:56.798671 | orchestrator | 2025-03-27 01:19:56 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:19:59.840102 | orchestrator 
| 2025-03-27 01:19:56 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:19:59.840221 | orchestrator | 2025-03-27 01:19:56 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:19:59.840258 | orchestrator | 2025-03-27 01:19:59 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:02.880186 | orchestrator | 2025-03-27 01:19:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:02.880313 | orchestrator | 2025-03-27 01:19:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:02.880353 | orchestrator | 2025-03-27 01:20:02 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:02.880800 | orchestrator | 2025-03-27 01:20:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:05.920516 | orchestrator | 2025-03-27 01:20:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:05.920684 | orchestrator | 2025-03-27 01:20:05 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:05.921177 | orchestrator | 2025-03-27 01:20:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:08.981170 | orchestrator | 2025-03-27 01:20:05 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:08.981315 | orchestrator | 2025-03-27 01:20:08 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:08.984162 | orchestrator | 2025-03-27 01:20:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:08.984736 | orchestrator | 2025-03-27 01:20:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:12.033244 | orchestrator | 2025-03-27 01:20:12 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:15.097881 | orchestrator | 2025-03-27 01:20:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:15.098005 | orchestrator | 2025-03-27 01:20:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:15.098117 | orchestrator | 2025-03-27 01:20:15 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:15.100300 | orchestrator | 2025-03-27 01:20:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:18.160016 | orchestrator | 2025-03-27 01:20:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:18.160148 | orchestrator | 2025-03-27 01:20:18 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:18.161462 | orchestrator | 2025-03-27 01:20:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:21.214813 | orchestrator | 2025-03-27 01:20:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:21.214952 | orchestrator | 2025-03-27 01:20:21 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:21.216977 | orchestrator | 2025-03-27 01:20:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:21.217285 | orchestrator | 2025-03-27 01:20:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:24.268103 | orchestrator | 2025-03-27 01:20:24 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:24.269854 | orchestrator | 2025-03-27 01:20:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 
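The repeated status lines above and below come from the deployment tooling polling two long-running backend tasks: it re-reads each task's state and sleeps between checks until the tasks leave the STARTED state (the first one reaches SUCCESS at 01:20:48 further down). A minimal Python sketch of such a wait loop follows; get_task_state is a hypothetical stand-in for however the client actually queries task status, not the real implementation:

    import time

    def get_task_state(task_id: str) -> str:
        """Hypothetical helper: return the current state of a backend task
        (e.g. 'PENDING', 'STARTED', 'SUCCESS', 'FAILURE'). Stands in for
        whatever the real client uses to look up task status."""
        raise NotImplementedError

    def wait_for_tasks(task_ids, interval=1):
        """Poll until every task has finished, logging each task's state on
        every pass, as in the console output above."""
        pending = set(task_ids)
        while pending:
            for task_id in list(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)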
2025-03-27 01:20:27.334967 | orchestrator | 2025-03-27 01:20:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:27.335100 | orchestrator | 2025-03-27 01:20:27 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:27.336613 | orchestrator | 2025-03-27 01:20:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:30.391045 | orchestrator | 2025-03-27 01:20:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:30.391177 | orchestrator | 2025-03-27 01:20:30 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:30.393258 | orchestrator | 2025-03-27 01:20:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:33.448398 | orchestrator | 2025-03-27 01:20:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:33.448617 | orchestrator | 2025-03-27 01:20:33 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:33.450684 | orchestrator | 2025-03-27 01:20:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:36.504509 | orchestrator | 2025-03-27 01:20:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:36.504670 | orchestrator | 2025-03-27 01:20:36 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:36.506976 | orchestrator | 2025-03-27 01:20:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:39.570171 | orchestrator | 2025-03-27 01:20:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:39.570311 | orchestrator | 2025-03-27 01:20:39 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:39.571771 | orchestrator | 2025-03-27 01:20:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:42.626346 | orchestrator | 2025-03-27 01:20:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:42.626470 | orchestrator | 2025-03-27 01:20:42 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:42.631851 | orchestrator | 2025-03-27 01:20:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:45.689954 | orchestrator | 2025-03-27 01:20:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:45.690148 | orchestrator | 2025-03-27 01:20:45 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state STARTED 2025-03-27 01:20:45.692368 | orchestrator | 2025-03-27 01:20:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:20:48.763531 | orchestrator | 2025-03-27 01:20:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:20:48.763647 | orchestrator | 2025-03-27 01:20:48 | INFO  | Task 550c88c2-8c9e-4d79-8891-ebb620d57edb is in state SUCCESS 2025-03-27 01:20:48.766106 | orchestrator | 2025-03-27 01:20:48.766207 | orchestrator | 2025-03-27 01:20:48.766227 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:20:48.766242 | orchestrator | 2025-03-27 01:20:48.766342 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:20:48.766359 | orchestrator | Thursday 27 March 2025 01:13:34 +0000 (0:00:00.297) 0:00:00.297 ******** 2025-03-27 01:20:48.766373 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:20:48.766388 | orchestrator | ok: 
[testbed-node-1] 2025-03-27 01:20:48.766402 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:20:48.766416 | orchestrator | 2025-03-27 01:20:48.766430 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:20:48.766444 | orchestrator | Thursday 27 March 2025 01:13:34 +0000 (0:00:00.622) 0:00:00.920 ******** 2025-03-27 01:20:48.766458 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-03-27 01:20:48.766472 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-03-27 01:20:48.766486 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-03-27 01:20:48.766499 | orchestrator | 2025-03-27 01:20:48.766513 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-03-27 01:20:48.766527 | orchestrator | 2025-03-27 01:20:48.766540 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-03-27 01:20:48.766554 | orchestrator | Thursday 27 March 2025 01:13:35 +0000 (0:00:00.634) 0:00:01.555 ******** 2025-03-27 01:20:48.766605 | orchestrator | 2025-03-27 01:20:48.766631 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-03-27 01:20:48.766655 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:20:48.766671 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:20:48.766711 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:20:48.766727 | orchestrator | 2025-03-27 01:20:48.766743 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-27 01:20:48.766760 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:20:48.766777 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:20:48.766793 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-27 01:20:48.766809 | orchestrator | 2025-03-27 01:20:48.766824 | orchestrator | 2025-03-27 01:20:48.766851 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-27 01:20:48.766867 | orchestrator | Thursday 27 March 2025 01:16:19 +0000 (0:02:44.396) 0:02:45.951 ******** 2025-03-27 01:20:48.766883 | orchestrator | =============================================================================== 2025-03-27 01:20:48.766899 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 164.40s 2025-03-27 01:20:48.766915 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-03-27 01:20:48.766931 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s 2025-03-27 01:20:48.766946 | orchestrator | 2025-03-27 01:20:48.766960 | orchestrator | 2025-03-27 01:20:48.766974 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-27 01:20:48.766988 | orchestrator | 2025-03-27 01:20:48.767001 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-03-27 01:20:48.767015 | orchestrator | Thursday 27 March 2025 01:11:48 +0000 (0:00:00.873) 0:00:00.873 ******** 2025-03-27 01:20:48.767029 | orchestrator | changed: [testbed-manager] 2025-03-27 01:20:48.767043 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.767057 | orchestrator | changed: 
[testbed-node-1] 2025-03-27 01:20:48.767071 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:20:48.767084 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.767098 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.767111 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.767125 | orchestrator | 2025-03-27 01:20:48.767139 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-27 01:20:48.767153 | orchestrator | Thursday 27 March 2025 01:11:50 +0000 (0:00:02.833) 0:00:03.709 ******** 2025-03-27 01:20:48.767167 | orchestrator | changed: [testbed-manager] 2025-03-27 01:20:48.767181 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.767194 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:20:48.767208 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:20:48.767222 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.767236 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.767254 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.767268 | orchestrator | 2025-03-27 01:20:48.767282 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-27 01:20:48.767296 | orchestrator | Thursday 27 March 2025 01:11:53 +0000 (0:00:03.109) 0:00:06.818 ******** 2025-03-27 01:20:48.767310 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-03-27 01:20:48.767324 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-03-27 01:20:48.767338 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-03-27 01:20:48.767351 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-03-27 01:20:48.767366 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-03-27 01:20:48.767380 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-03-27 01:20:48.767393 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-03-27 01:20:48.767407 | orchestrator | 2025-03-27 01:20:48.767421 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-03-27 01:20:48.767442 | orchestrator | 2025-03-27 01:20:48.767456 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-03-27 01:20:48.767470 | orchestrator | Thursday 27 March 2025 01:11:57 +0000 (0:00:03.088) 0:00:09.907 ******** 2025-03-27 01:20:48.767484 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:20:48.767498 | orchestrator | 2025-03-27 01:20:48.767512 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-03-27 01:20:48.767539 | orchestrator | Thursday 27 March 2025 01:11:58 +0000 (0:00:01.626) 0:00:11.533 ******** 2025-03-27 01:20:48.767554 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-03-27 01:20:48.767599 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-03-27 01:20:48.767614 | orchestrator | 2025-03-27 01:20:48.767628 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-03-27 01:20:48.767642 | orchestrator | Thursday 27 March 2025 01:12:04 +0000 (0:00:05.491) 0:00:17.025 ******** 2025-03-27 01:20:48.767656 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-27 01:20:48.767670 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-27 01:20:48.767683 
| orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.767697 | orchestrator | 2025-03-27 01:20:48.767710 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-03-27 01:20:48.767724 | orchestrator | Thursday 27 March 2025 01:12:09 +0000 (0:00:05.412) 0:00:22.438 ******** 2025-03-27 01:20:48.767737 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.767751 | orchestrator | 2025-03-27 01:20:48.767765 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-03-27 01:20:48.767778 | orchestrator | Thursday 27 March 2025 01:12:11 +0000 (0:00:01.490) 0:00:23.928 ******** 2025-03-27 01:20:48.767792 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.767805 | orchestrator | 2025-03-27 01:20:48.767818 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-03-27 01:20:48.767832 | orchestrator | Thursday 27 March 2025 01:12:13 +0000 (0:00:02.486) 0:00:26.415 ******** 2025-03-27 01:20:48.767845 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.767859 | orchestrator | 2025-03-27 01:20:48.767872 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-03-27 01:20:48.767885 | orchestrator | Thursday 27 March 2025 01:12:16 +0000 (0:00:03.294) 0:00:29.710 ******** 2025-03-27 01:20:48.767899 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.767913 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.767926 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.767940 | orchestrator | 2025-03-27 01:20:48.767953 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-03-27 01:20:48.767966 | orchestrator | Thursday 27 March 2025 01:12:17 +0000 (0:00:00.815) 0:00:30.525 ******** 2025-03-27 01:20:48.767980 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:20:48.767994 | orchestrator | 2025-03-27 01:20:48.768007 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-03-27 01:20:48.768021 | orchestrator | Thursday 27 March 2025 01:12:51 +0000 (0:00:33.968) 0:01:04.494 ******** 2025-03-27 01:20:48.768034 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.768048 | orchestrator | 2025-03-27 01:20:48.768066 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-03-27 01:20:48.768080 | orchestrator | Thursday 27 March 2025 01:13:06 +0000 (0:00:14.944) 0:01:19.439 ******** 2025-03-27 01:20:48.768093 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:20:48.768107 | orchestrator | 2025-03-27 01:20:48.768121 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-03-27 01:20:48.768134 | orchestrator | Thursday 27 March 2025 01:13:18 +0000 (0:00:11.650) 0:01:31.089 ******** 2025-03-27 01:20:48.768148 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:20:48.768162 | orchestrator | 2025-03-27 01:20:48.768175 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-03-27 01:20:48.768189 | orchestrator | Thursday 27 March 2025 01:13:20 +0000 (0:00:02.356) 0:01:33.445 ******** 2025-03-27 01:20:48.768209 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.768223 | orchestrator | 2025-03-27 01:20:48.768236 | orchestrator | TASK [nova : include_tasks] **************************************************** 
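The cell bootstrap steps above ('Create cell0 mappings', 'Get a list of existing cells', 'Extract current cell settings from list', 'Update cell0 mappings') correspond, roughly, to nova-manage cell_v2 operations that kolla-ansible runs in a bootstrap container on the first API host. The Python sketch below only illustrates those underlying commands; the container name and the "docker exec" invocation are assumptions for illustration, not taken from the playbooks:

    import subprocess

    def nova_manage_cell_v2(*args):
        # Illustrative only: run a nova-manage cell_v2 subcommand inside the
        # nova_api container; container name and invocation are assumptions,
        # not taken from the kolla-ansible roles.
        cmd = ["docker", "exec", "nova_api", "nova-manage", "cell_v2", *args]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    nova_manage_cell_v2("map_cell0")                       # "Create cell0 mappings"
    print(nova_manage_cell_v2("list_cells", "--verbose"))  # "Get a list of existing cells"
    nova_manage_cell_v2("create_cell", "--name", "cell1")  # "Create cell" (in the nova-cell play below);
                                                           # transport and DB URLs come from nova.conf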
2025-03-27 01:20:48.768250 | orchestrator | Thursday 27 March 2025 01:13:21 +0000 (0:00:01.029) 0:01:34.475 ******** 2025-03-27 01:20:48.768264 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:20:48.768277 | orchestrator | 2025-03-27 01:20:48.768291 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-03-27 01:20:48.768304 | orchestrator | Thursday 27 March 2025 01:13:22 +0000 (0:00:00.829) 0:01:35.305 ******** 2025-03-27 01:20:48.768318 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:20:48.768331 | orchestrator | 2025-03-27 01:20:48.768345 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-03-27 01:20:48.768358 | orchestrator | Thursday 27 March 2025 01:13:41 +0000 (0:00:18.579) 0:01:53.884 ******** 2025-03-27 01:20:48.768372 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.768386 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.768399 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.768413 | orchestrator | 2025-03-27 01:20:48.768426 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-03-27 01:20:48.768439 | orchestrator | 2025-03-27 01:20:48.768453 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-03-27 01:20:48.768467 | orchestrator | Thursday 27 March 2025 01:13:41 +0000 (0:00:00.675) 0:01:54.560 ******** 2025-03-27 01:20:48.768480 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:20:48.768494 | orchestrator | 2025-03-27 01:20:48.768507 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-03-27 01:20:48.768521 | orchestrator | Thursday 27 March 2025 01:13:43 +0000 (0:00:01.949) 0:01:56.510 ******** 2025-03-27 01:20:48.768534 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.768548 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.768611 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.768628 | orchestrator | 2025-03-27 01:20:48.769168 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-03-27 01:20:48.769207 | orchestrator | Thursday 27 March 2025 01:13:46 +0000 (0:00:02.549) 0:01:59.059 ******** 2025-03-27 01:20:48.769221 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.769235 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.769249 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.769263 | orchestrator | 2025-03-27 01:20:48.769277 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-03-27 01:20:48.769300 | orchestrator | Thursday 27 March 2025 01:13:48 +0000 (0:00:02.547) 0:02:01.607 ******** 2025-03-27 01:20:48.769314 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.769328 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.769342 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.769355 | orchestrator | 2025-03-27 01:20:48.769369 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-03-27 01:20:48.769383 | orchestrator | Thursday 27 March 2025 01:13:49 +0000 (0:00:00.617) 0:02:02.224 ******** 2025-03-27 01:20:48.769396 | orchestrator | skipping: [testbed-node-1] => 
(item=None)  2025-03-27 01:20:48.769410 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.769424 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-03-27 01:20:48.769511 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.769529 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-03-27 01:20:48.770169 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-03-27 01:20:48.770184 | orchestrator | 2025-03-27 01:20:48.770267 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-03-27 01:20:48.770283 | orchestrator | Thursday 27 March 2025 01:13:59 +0000 (0:00:10.383) 0:02:12.607 ******** 2025-03-27 01:20:48.770310 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.770358 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.770374 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.770389 | orchestrator | 2025-03-27 01:20:48.771067 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-03-27 01:20:48.771100 | orchestrator | Thursday 27 March 2025 01:14:00 +0000 (0:00:00.552) 0:02:13.159 ******** 2025-03-27 01:20:48.771114 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-03-27 01:20:48.771128 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.771142 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-03-27 01:20:48.771190 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.771206 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-03-27 01:20:48.771220 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.771233 | orchestrator | 2025-03-27 01:20:48.771247 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-03-27 01:20:48.771261 | orchestrator | Thursday 27 March 2025 01:14:01 +0000 (0:00:00.828) 0:02:13.988 ******** 2025-03-27 01:20:48.771275 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.771357 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.771374 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.771388 | orchestrator | 2025-03-27 01:20:48.771401 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-03-27 01:20:48.771415 | orchestrator | Thursday 27 March 2025 01:14:01 +0000 (0:00:00.504) 0:02:14.492 ******** 2025-03-27 01:20:48.771429 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.771442 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.771456 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.771469 | orchestrator | 2025-03-27 01:20:48.771492 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-03-27 01:20:48.771507 | orchestrator | Thursday 27 March 2025 01:14:02 +0000 (0:00:00.989) 0:02:15.482 ******** 2025-03-27 01:20:48.771521 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.771535 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.771549 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.771633 | orchestrator | 2025-03-27 01:20:48.771652 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-03-27 01:20:48.771668 | orchestrator | Thursday 27 March 2025 01:14:04 +0000 (0:00:02.248) 0:02:17.731 ******** 2025-03-27 01:20:48.771684 | orchestrator | skipping: [testbed-node-1] 2025-03-27 
01:20:48.772182 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.772195 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:20:48.772208 | orchestrator | 2025-03-27 01:20:48.772220 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-03-27 01:20:48.772232 | orchestrator | Thursday 27 March 2025 01:14:26 +0000 (0:00:21.416) 0:02:39.148 ******** 2025-03-27 01:20:48.772244 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.772257 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.772269 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:20:48.772281 | orchestrator | 2025-03-27 01:20:48.772293 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-03-27 01:20:48.772305 | orchestrator | Thursday 27 March 2025 01:14:38 +0000 (0:00:12.240) 0:02:51.389 ******** 2025-03-27 01:20:48.772317 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:20:48.772337 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.772349 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.772362 | orchestrator | 2025-03-27 01:20:48.772374 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-03-27 01:20:48.772386 | orchestrator | Thursday 27 March 2025 01:14:39 +0000 (0:00:01.333) 0:02:52.722 ******** 2025-03-27 01:20:48.772477 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.772490 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.772502 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.772514 | orchestrator | 2025-03-27 01:20:48.772539 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-03-27 01:20:48.772551 | orchestrator | Thursday 27 March 2025 01:14:52 +0000 (0:00:12.397) 0:03:05.120 ******** 2025-03-27 01:20:48.772584 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.772597 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.772609 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.772622 | orchestrator | 2025-03-27 01:20:48.772634 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-03-27 01:20:48.772647 | orchestrator | Thursday 27 March 2025 01:14:53 +0000 (0:00:01.622) 0:03:06.742 ******** 2025-03-27 01:20:48.772895 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.772911 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.772924 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.772936 | orchestrator | 2025-03-27 01:20:48.772949 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-03-27 01:20:48.772961 | orchestrator | 2025-03-27 01:20:48.772973 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-03-27 01:20:48.772985 | orchestrator | Thursday 27 March 2025 01:14:54 +0000 (0:00:00.494) 0:03:07.236 ******** 2025-03-27 01:20:48.773072 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:20:48.773090 | orchestrator | 2025-03-27 01:20:48.773103 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-03-27 01:20:48.773115 | orchestrator | Thursday 27 March 2025 01:14:55 +0000 (0:00:00.944) 0:03:08.181 ******** 2025-03-27 01:20:48.773128 | orchestrator | skipping: 
[testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-03-27 01:20:48.773140 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-03-27 01:20:48.773152 | orchestrator | 2025-03-27 01:20:48.773165 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-03-27 01:20:48.773177 | orchestrator | Thursday 27 March 2025 01:14:58 +0000 (0:00:03.474) 0:03:11.656 ******** 2025-03-27 01:20:48.773189 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-03-27 01:20:48.773203 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-03-27 01:20:48.773215 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-03-27 01:20:48.773229 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-03-27 01:20:48.773241 | orchestrator | 2025-03-27 01:20:48.773254 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-03-27 01:20:48.773266 | orchestrator | Thursday 27 March 2025 01:15:06 +0000 (0:00:07.784) 0:03:19.440 ******** 2025-03-27 01:20:48.773278 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-03-27 01:20:48.773291 | orchestrator | 2025-03-27 01:20:48.773303 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-03-27 01:20:48.773315 | orchestrator | Thursday 27 March 2025 01:15:10 +0000 (0:00:03.794) 0:03:23.235 ******** 2025-03-27 01:20:48.773327 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-03-27 01:20:48.773340 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-03-27 01:20:48.773352 | orchestrator | 2025-03-27 01:20:48.773364 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-03-27 01:20:48.773377 | orchestrator | Thursday 27 March 2025 01:15:14 +0000 (0:00:04.426) 0:03:27.661 ******** 2025-03-27 01:20:48.773389 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-03-27 01:20:48.773401 | orchestrator | 2025-03-27 01:20:48.773413 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-03-27 01:20:48.773426 | orchestrator | Thursday 27 March 2025 01:15:18 +0000 (0:00:03.773) 0:03:31.435 ******** 2025-03-27 01:20:48.773438 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-03-27 01:20:48.773461 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-03-27 01:20:48.773473 | orchestrator | 2025-03-27 01:20:48.773485 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-03-27 01:20:48.773498 | orchestrator | Thursday 27 March 2025 01:15:27 +0000 (0:00:09.126) 0:03:40.562 ******** 2025-03-27 01:20:48.773514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.773641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.773665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.773687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.773701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.773715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.773803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.773824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.773837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.773850 | orchestrator | 2025-03-27 01:20:48.773863 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-03-27 01:20:48.773887 | orchestrator | Thursday 27 March 2025 01:15:29 +0000 (0:00:01.949) 0:03:42.511 ******** 2025-03-27 01:20:48.773907 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.773920 | orchestrator | 2025-03-27 01:20:48.773933 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-03-27 01:20:48.773945 | orchestrator | Thursday 27 March 2025 01:15:29 +0000 (0:00:00.129) 0:03:42.641 ******** 2025-03-27 01:20:48.773958 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.773990 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.774003 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.774056 | orchestrator | 2025-03-27 01:20:48.774072 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-03-27 01:20:48.774085 | orchestrator | Thursday 27 March 2025 01:15:30 +0000 (0:00:00.489) 0:03:43.130 ******** 2025-03-27 01:20:48.774097 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-27 01:20:48.774110 | orchestrator | 2025-03-27 01:20:48.774122 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-03-27 01:20:48.774134 | orchestrator | Thursday 27 March 2025 01:15:30 +0000 (0:00:00.402) 0:03:43.533 ******** 2025-03-27 01:20:48.774147 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.774159 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.774171 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.774184 | orchestrator | 2025-03-27 01:20:48.774196 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-03-27 01:20:48.774208 | orchestrator | Thursday 27 March 2025 01:15:31 +0000 (0:00:00.485) 0:03:44.018 ******** 2025-03-27 01:20:48.774220 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:20:48.774233 | orchestrator | 2025-03-27 01:20:48.774246 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-03-27 01:20:48.774258 | orchestrator | Thursday 27 March 2025 01:15:31 +0000 (0:00:00.731) 0:03:44.749 ******** 2025-03-27 01:20:48.774271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.774369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.774399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.774414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.774428 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.774512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.774531 | orchestrator | 2025-03-27 01:20:48.774544 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-03-27 01:20:48.774557 | orchestrator | Thursday 27 March 2025 01:15:34 +0000 (0:00:02.964) 0:03:47.714 ******** 2025-03-27 01:20:48.774591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 01:20:48.774611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.774624 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.774637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 01:20:48.774659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.774672 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.774751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 01:20:48.774807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.774821 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.774834 | orchestrator | 2025-03-27 01:20:48.774846 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-03-27 01:20:48.774859 | orchestrator | Thursday 27 March 2025 01:15:35 +0000 (0:00:00.850) 0:03:48.564 ******** 2025-03-27 01:20:48.774871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 01:20:48.774885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.774897 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.774976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 01:20:48.775015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.775030 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.775043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 01:20:48.775057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.775070 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.775083 | orchestrator | 2025-03-27 01:20:48.775096 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-03-27 01:20:48.775109 | orchestrator | Thursday 27 March 2025 01:15:37 +0000 (0:00:01.264) 0:03:49.828 ******** 2025-03-27 01:20:48.775187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.775225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.775241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2025-03-27 01:20:48.775255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.775338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.775357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.775372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.775396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.775411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.775425 | orchestrator | 2025-03-27 01:20:48.775438 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-03-27 01:20:48.775468 | orchestrator | Thursday 27 March 2025 01:15:39 +0000 (0:00:02.790) 0:03:52.619 ******** 2025-03-27 01:20:48.775544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.775659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.775676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.775690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.775703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.775800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.775831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.775845 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.775857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.775870 | orchestrator | 2025-03-27 01:20:48.775883 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-03-27 01:20:48.775895 | orchestrator | Thursday 27 March 2025 01:15:46 +0000 (0:00:06.600) 0:03:59.219 ******** 2025-03-27 01:20:48.775908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 01:20:48.775988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.776008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.776022 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.776045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 01:20:48.776059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.776072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.776091 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.776152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-27 01:20:48.776176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.776187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.776197 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.776208 | orchestrator | 2025-03-27 01:20:48.776218 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-03-27 01:20:48.776228 | orchestrator | Thursday 27 March 2025 01:15:47 +0000 (0:00:00.849) 0:04:00.069 ******** 2025-03-27 01:20:48.776239 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.776249 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:20:48.776259 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:20:48.776269 | orchestrator | 2025-03-27 01:20:48.776279 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-03-27 01:20:48.776302 | orchestrator | Thursday 27 March 2025 01:15:49 +0000 (0:00:01.874) 0:04:01.943 ******** 2025-03-27 01:20:48.776312 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.776322 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.776332 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.776342 | orchestrator | 2025-03-27 01:20:48.776352 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-03-27 01:20:48.776362 | 
orchestrator | Thursday 27 March 2025 01:15:49 +0000 (0:00:00.492) 0:04:02.436 ******** 2025-03-27 01:20:48.776373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.776453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.776480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-27 01:20:48.776491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.776508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.776519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.776598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.776615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
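The loop items echoed in the nova tasks above all share the same kolla-ansible service-map shape: container_name, image, enabled, volumes, healthcheck, and (for the API services) haproxy. Below is a minimal illustrative Python sketch of how such a map determines which containers get acted on and what healthcheck command each one carries. The dict is abbreviated from the items printed above; the variable and helper names (services, is_enabled) are assumptions for illustration, not code taken from the role.

# Illustrative sketch only: the dict layout mirrors the loop items printed in the
# log above, but the names `services` and `is_enabled` are assumptions, not the
# role's actual variables or helpers.
services = {
    "nova-api": {
        "container_name": "nova_api",
        "image": "registry.osism.tech/kolla/release/nova-api:29.2.1.20241206",
        "enabled": True,
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774 "],
        },
    },
    "nova-scheduler": {
        "container_name": "nova_scheduler",
        "image": "registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206",
        "enabled": True,
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
            "test": ["CMD-SHELL", "healthcheck_port nova-scheduler 5672"],
        },
    },
    "nova-super-conductor": {
        "container_name": "nova_super_conductor",
        "image": "registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206",
        # The log prints the string 'no' here, not a boolean, hence the helper below.
        "enabled": "no",
    },
}

def is_enabled(value) -> bool:
    """Interpret the mixed bool / 'no' / 'yes' flags seen in the log output."""
    if isinstance(value, bool):
        return value
    return str(value).lower() not in ("no", "false", "0", "")

for name, svc in services.items():
    state = "deploy" if is_enabled(svc.get("enabled")) else "skip"
    print(f"{state:6s} {svc['container_name']:22s} {svc['image']}")
    test = svc.get("healthcheck", {}).get("test", [])
    if test:
        # test[0] is the docker test type (CMD-SHELL); the rest is the command.
        print(f"       healthcheck: {' '.join(test[1:])}")

Run as-is, the sketch prints one summary line per service plus its healthcheck command, which matches the changed/skipping pattern visible in the task output above: nova-api and nova-scheduler (enabled True) are acted on with healthcheck_curl and healthcheck_port tests respectively, while the nova-super-conductor items are skipped on the control nodes, consistent with their enabled flag being the string 'no'.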
2025-03-27 01:20:48.776636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.776648 | orchestrator | 2025-03-27 01:20:48.776660 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-03-27 01:20:48.776671 | orchestrator | Thursday 27 March 2025 01:15:51 +0000 (0:00:02.380) 0:04:04.816 ******** 2025-03-27 01:20:48.776682 | orchestrator | 2025-03-27 01:20:48.776693 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-03-27 01:20:48.776704 | orchestrator | Thursday 27 March 2025 01:15:52 +0000 (0:00:00.281) 0:04:05.097 ******** 2025-03-27 01:20:48.776715 | orchestrator | 2025-03-27 01:20:48.776729 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-03-27 01:20:48.776746 | orchestrator | Thursday 27 March 2025 01:15:52 +0000 (0:00:00.110) 0:04:05.208 ******** 2025-03-27 01:20:48.776756 | orchestrator | 2025-03-27 01:20:48.776766 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-03-27 01:20:48.776777 | orchestrator | Thursday 27 March 2025 01:15:52 +0000 (0:00:00.268) 0:04:05.477 ******** 2025-03-27 01:20:48.776787 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.776797 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:20:48.776808 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:20:48.776818 | orchestrator | 2025-03-27 01:20:48.776828 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-03-27 01:20:48.776838 | orchestrator | Thursday 27 March 2025 01:16:11 +0000 (0:00:19.190) 0:04:24.667 ******** 2025-03-27 01:20:48.776848 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.776859 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:20:48.776869 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:20:48.776879 | orchestrator | 2025-03-27 01:20:48.776889 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-03-27 01:20:48.776899 | orchestrator | 2025-03-27 01:20:48.776909 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-03-27 01:20:48.776932 | orchestrator | Thursday 27 March 2025 01:16:22 +0000 (0:00:10.688) 0:04:35.356 ******** 2025-03-27 01:20:48.776942 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:20:48.776954 | orchestrator | 2025-03-27 01:20:48.776964 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-03-27 01:20:48.776973 | orchestrator | Thursday 27 March 2025 01:16:24 +0000 (0:00:01.584) 0:04:36.940 ******** 2025-03-27 01:20:48.776983 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.776993 | orchestrator 
| skipping: [testbed-node-4] 2025-03-27 01:20:48.777003 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.777013 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.777023 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.777033 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.777043 | orchestrator | 2025-03-27 01:20:48.777053 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-03-27 01:20:48.777063 | orchestrator | Thursday 27 March 2025 01:16:24 +0000 (0:00:00.794) 0:04:37.734 ******** 2025-03-27 01:20:48.777073 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.777083 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.777092 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.777102 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:20:48.777112 | orchestrator | 2025-03-27 01:20:48.777122 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-03-27 01:20:48.777132 | orchestrator | Thursday 27 March 2025 01:16:26 +0000 (0:00:01.338) 0:04:39.073 ******** 2025-03-27 01:20:48.777142 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-03-27 01:20:48.777152 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-03-27 01:20:48.777162 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-03-27 01:20:48.777172 | orchestrator | 2025-03-27 01:20:48.777236 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-03-27 01:20:48.777251 | orchestrator | Thursday 27 March 2025 01:16:26 +0000 (0:00:00.681) 0:04:39.754 ******** 2025-03-27 01:20:48.777262 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-03-27 01:20:48.777272 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-03-27 01:20:48.777282 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-03-27 01:20:48.777292 | orchestrator | 2025-03-27 01:20:48.777303 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-03-27 01:20:48.777313 | orchestrator | Thursday 27 March 2025 01:16:28 +0000 (0:00:01.372) 0:04:41.126 ******** 2025-03-27 01:20:48.777333 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-03-27 01:20:48.777344 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.777354 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-03-27 01:20:48.777364 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.777378 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-03-27 01:20:48.777388 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.777398 | orchestrator | 2025-03-27 01:20:48.777408 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-03-27 01:20:48.777418 | orchestrator | Thursday 27 March 2025 01:16:29 +0000 (0:00:00.877) 0:04:42.004 ******** 2025-03-27 01:20:48.777428 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-27 01:20:48.777438 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-27 01:20:48.777448 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-03-27 01:20:48.777458 | orchestrator | changed: [testbed-node-3] => 
(item=net.bridge.bridge-nf-call-iptables) 2025-03-27 01:20:48.777468 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.777478 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-03-27 01:20:48.777488 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-27 01:20:48.777497 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-27 01:20:48.777507 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.777517 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-27 01:20:48.777527 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-27 01:20:48.777537 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.777547 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-03-27 01:20:48.777575 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-03-27 01:20:48.777586 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-03-27 01:20:48.777596 | orchestrator | 2025-03-27 01:20:48.777606 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-03-27 01:20:48.777616 | orchestrator | Thursday 27 March 2025 01:16:30 +0000 (0:00:01.134) 0:04:43.138 ******** 2025-03-27 01:20:48.777626 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.777636 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.777646 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.777656 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.777666 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.777676 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.777686 | orchestrator | 2025-03-27 01:20:48.777696 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-03-27 01:20:48.777706 | orchestrator | Thursday 27 March 2025 01:16:31 +0000 (0:00:01.216) 0:04:44.355 ******** 2025-03-27 01:20:48.777716 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.777726 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.777736 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.777746 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.777756 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.777766 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.777776 | orchestrator | 2025-03-27 01:20:48.777785 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-03-27 01:20:48.777795 | orchestrator | Thursday 27 March 2025 01:16:33 +0000 (0:00:01.820) 0:04:46.175 ******** 2025-03-27 01:20:48.777806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.777874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.777890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.777902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.777920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.777939 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.778006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.778051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.778065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.778079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.778090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.778111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.778129 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.778210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.778226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.778237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.778248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.778259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.778275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.778347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.778362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': 
'30'}}})  2025-03-27 01:20:48.778372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.778383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.778393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.778404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.778421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.778440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
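The loop items above and below are entries of the nova-cell service map that the config-copy and cert-copy tasks iterate over, printed by Ansible as Python dicts. For readability, the following is a sketch of the nova-libvirt entry rebuilt only from the values shown in these log lines; it is an illustration of the data structure being looped over, not a copy of the role's defaults file, and the empty strings in the logged volume lists are conditional volumes that evaluated to '' on this testbed.

# nova-libvirt entry, reconstructed from the logged loop item (sketch)
nova-libvirt:
  container_name: nova_libvirt
  group: compute
  enabled: true
  image: registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206
  pid_mode: host
  cgroupns_mode: host
  privileged: true
  volumes:
    - /etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - /lib/modules:/lib/modules:ro
    - /run:/run:shared
    - /dev:/dev
    - /sys/fs/cgroup:/sys/fs/cgroup
    - kolla_logs:/var/log/kolla/
    - libvirtd:/var/lib/libvirt
    - nova_compute:/var/lib/nova/
    - nova_libvirt_qemu:/etc/libvirt/qemu
  dimensions:
    ulimits:
      memlock:
        soft: 67108864
        hard: 67108864
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "virsh version --daemon"]
    timeout: "30"

The other entries (nova-ssh, nova-novncproxy, nova-spicehtml5proxy, nova-serialproxy, nova-conductor, nova-compute, nova-compute-ironic) follow the same shape; tasks report "changed" on the hosts in each entry's group and "skipping" elsewhere or where enabled is false, which is why the compute-only services run on testbed-node-3/4/5 and the conductor/proxy services on testbed-node-0/1/2.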
2025-03-27 01:20:48.778530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.778547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.778608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.778621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.778639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.778650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.778721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.778746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.778757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.778767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.778800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.778812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.778873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.778888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.778906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.778916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.778930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.778939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.778989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.779018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.779027 | orchestrator | 2025-03-27 01:20:48.779036 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-03-27 01:20:48.779045 | orchestrator | Thursday 27 March 2025 01:16:36 +0000 (0:00:02.693) 0:04:48.869 ******** 2025-03-27 01:20:48.779054 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-27 01:20:48.779067 | orchestrator | 2025-03-27 01:20:48.779076 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-03-27 01:20:48.779085 | orchestrator | Thursday 27 March 2025 01:16:37 +0000 (0:00:01.564) 0:04:50.433 ******** 2025-03-27 01:20:48.779094 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779200 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779289 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.779415 | orchestrator | 2025-03-27 01:20:48.779424 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-03-27 01:20:48.779443 | orchestrator | Thursday 27 March 2025 01:16:41 +0000 (0:00:04.276) 0:04:54.710 ******** 2025-03-27 01:20:48.779453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.779467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.779476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.779484 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.779501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.779555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.779583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.779592 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.779606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.779615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.779632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.779641 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.779650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.779714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.779729 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.779738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.779752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.779762 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.779772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.779781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.779801 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.779810 | orchestrator | 2025-03-27 01:20:48.779818 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-03-27 01:20:48.779827 | orchestrator | Thursday 27 March 2025 01:16:43 +0000 (0:00:01.896) 0:04:56.606 ******** 2025-03-27 01:20:48.779836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 
'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.779888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.779910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.779920 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.779930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.779947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.779956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.779965 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.780005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.780016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.780030 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.780039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.780048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.780063 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.780073 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.780082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.780110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.780125 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.780134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.780143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.780152 | orchestrator | skipping: 
[testbed-node-1] 2025-03-27 01:20:48.780161 | orchestrator | 2025-03-27 01:20:48.780170 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-03-27 01:20:48.780178 | orchestrator | Thursday 27 March 2025 01:16:46 +0000 (0:00:02.356) 0:04:58.963 ******** 2025-03-27 01:20:48.780187 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.780196 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.780204 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.780213 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-27 01:20:48.780222 | orchestrator | 2025-03-27 01:20:48.780230 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-03-27 01:20:48.780239 | orchestrator | Thursday 27 March 2025 01:16:47 +0000 (0:00:01.194) 0:05:00.157 ******** 2025-03-27 01:20:48.780247 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-27 01:20:48.780256 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-27 01:20:48.780264 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-27 01:20:48.780273 | orchestrator | 2025-03-27 01:20:48.780282 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-03-27 01:20:48.780290 | orchestrator | Thursday 27 March 2025 01:16:48 +0000 (0:00:00.849) 0:05:01.007 ******** 2025-03-27 01:20:48.780299 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-27 01:20:48.780307 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-27 01:20:48.780316 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-27 01:20:48.780324 | orchestrator | 2025-03-27 01:20:48.780333 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-03-27 01:20:48.780341 | orchestrator | Thursday 27 March 2025 01:16:49 +0000 (0:00:00.814) 0:05:01.822 ******** 2025-03-27 01:20:48.780350 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:20:48.780358 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:20:48.780367 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:20:48.780376 | orchestrator | 2025-03-27 01:20:48.780384 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-03-27 01:20:48.780393 | orchestrator | Thursday 27 March 2025 01:16:49 +0000 (0:00:00.909) 0:05:02.731 ******** 2025-03-27 01:20:48.780402 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:20:48.780410 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:20:48.780418 | orchestrator | ok: [testbed-node-5] 2025-03-27 01:20:48.780431 | orchestrator | 2025-03-27 01:20:48.780440 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-03-27 01:20:48.780448 | orchestrator | Thursday 27 March 2025 01:16:50 +0000 (0:00:00.352) 0:05:03.083 ******** 2025-03-27 01:20:48.780457 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-03-27 01:20:48.780465 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-03-27 01:20:48.780474 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-03-27 01:20:48.780483 | orchestrator | 2025-03-27 01:20:48.780491 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-03-27 01:20:48.780500 | orchestrator | Thursday 27 March 2025 01:16:51 +0000 (0:00:01.463) 0:05:04.547 ******** 2025-03-27 01:20:48.780508 | 
orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-03-27 01:20:48.780517 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-03-27 01:20:48.780527 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-03-27 01:20:48.780536 | orchestrator | 2025-03-27 01:20:48.780547 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-03-27 01:20:48.780556 | orchestrator | Thursday 27 March 2025 01:16:53 +0000 (0:00:01.409) 0:05:05.957 ******** 2025-03-27 01:20:48.780582 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-03-27 01:20:48.780596 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-03-27 01:20:48.780606 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-03-27 01:20:48.780638 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-03-27 01:20:48.780649 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-03-27 01:20:48.780659 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-03-27 01:20:48.780669 | orchestrator | 2025-03-27 01:20:48.780679 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-03-27 01:20:48.780688 | orchestrator | Thursday 27 March 2025 01:16:59 +0000 (0:00:05.956) 0:05:11.914 ******** 2025-03-27 01:20:48.780698 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.780707 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.780717 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.780726 | orchestrator | 2025-03-27 01:20:48.780736 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-03-27 01:20:48.780746 | orchestrator | Thursday 27 March 2025 01:16:59 +0000 (0:00:00.313) 0:05:12.227 ******** 2025-03-27 01:20:48.780755 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.780765 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.780775 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.780784 | orchestrator | 2025-03-27 01:20:48.780794 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-03-27 01:20:48.780804 | orchestrator | Thursday 27 March 2025 01:16:59 +0000 (0:00:00.489) 0:05:12.717 ******** 2025-03-27 01:20:48.780813 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.780823 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.780833 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.780847 | orchestrator | 2025-03-27 01:20:48.780857 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-03-27 01:20:48.780867 | orchestrator | Thursday 27 March 2025 01:17:01 +0000 (0:00:01.612) 0:05:14.329 ******** 2025-03-27 01:20:48.780876 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-03-27 01:20:48.780885 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-03-27 01:20:48.780897 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-03-27 01:20:48.780905 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder 
secret', 'enabled': 'yes'}) 2025-03-27 01:20:48.780919 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-03-27 01:20:48.780928 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-03-27 01:20:48.780936 | orchestrator | 2025-03-27 01:20:48.780945 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-03-27 01:20:48.780954 | orchestrator | Thursday 27 March 2025 01:17:05 +0000 (0:00:03.651) 0:05:17.981 ******** 2025-03-27 01:20:48.780962 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-27 01:20:48.780971 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-27 01:20:48.780980 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-27 01:20:48.780988 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-27 01:20:48.780997 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.781006 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-27 01:20:48.781015 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.781024 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-27 01:20:48.781033 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.781041 | orchestrator | 2025-03-27 01:20:48.781050 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-03-27 01:20:48.781058 | orchestrator | Thursday 27 March 2025 01:17:08 +0000 (0:00:03.623) 0:05:21.604 ******** 2025-03-27 01:20:48.781067 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.781075 | orchestrator | 2025-03-27 01:20:48.781084 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-03-27 01:20:48.781092 | orchestrator | Thursday 27 March 2025 01:17:08 +0000 (0:00:00.152) 0:05:21.757 ******** 2025-03-27 01:20:48.781100 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.781109 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.781118 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.781127 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.781135 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.781143 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.781152 | orchestrator | 2025-03-27 01:20:48.781160 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-03-27 01:20:48.781169 | orchestrator | Thursday 27 March 2025 01:17:09 +0000 (0:00:00.953) 0:05:22.711 ******** 2025-03-27 01:20:48.781177 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-27 01:20:48.781186 | orchestrator | 2025-03-27 01:20:48.781194 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-03-27 01:20:48.781203 | orchestrator | Thursday 27 March 2025 01:17:10 +0000 (0:00:00.484) 0:05:23.195 ******** 2025-03-27 01:20:48.781211 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.781220 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.781228 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.781237 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.781245 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.781254 | orchestrator | skipping: [testbed-node-2] 2025-03-27 
01:20:48.781262 | orchestrator | 2025-03-27 01:20:48.781271 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-03-27 01:20:48.781279 | orchestrator | Thursday 27 March 2025 01:17:11 +0000 (0:00:01.032) 0:05:24.228 ******** 2025-03-27 01:20:48.781314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.781329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.781339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.781348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.781357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.781384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.781399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781416 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.781469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.781484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.781502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.781517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.781535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.781582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.781594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.781621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.781636 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.781677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.781687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.781696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.781729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.781739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': 
{'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.781769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781873 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.781957 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.781981 | orchestrator | 2025-03-27 01:20:48.781990 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-03-27 01:20:48.781998 | orchestrator | Thursday 27 March 2025 01:17:15 +0000 (0:00:03.891) 0:05:28.119 ******** 2025-03-27 01:20:48.782007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.782041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.782073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.782083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.782092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.782101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.782132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.782160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.782171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.782179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.782188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.782204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.782220 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.782229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.782268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.782277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.782305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.782314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.782342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.782353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.782362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.782376 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.782390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.782428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2025-03-27 01:20:48.782437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.782474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.782483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.782510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.782521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.782529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.782538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.782551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.782572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.782581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.782616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.782645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 
'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.782700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.782719 | orchestrator | 2025-03-27 01:20:48.782728 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-03-27 01:20:48.782737 | orchestrator | Thursday 27 March 2025 01:17:23 +0000 (0:00:07.925) 0:05:36.045 ******** 2025-03-27 01:20:48.782746 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.782754 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.782762 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.782771 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.782779 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.782792 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.782800 | orchestrator | 2025-03-27 01:20:48.782809 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-03-27 01:20:48.782817 | orchestrator | Thursday 27 March 2025 01:17:25 +0000 (0:00:01.860) 0:05:37.905 ******** 2025-03-27 01:20:48.782826 | orchestrator | skipping: [testbed-node-1] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-03-27 01:20:48.782834 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-03-27 01:20:48.782842 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-03-27 01:20:48.782851 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.782859 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-03-27 01:20:48.782868 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-03-27 01:20:48.782876 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.782884 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-03-27 01:20:48.782893 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-03-27 01:20:48.782901 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.782910 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-03-27 01:20:48.782918 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-03-27 01:20:48.782932 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-03-27 01:20:48.782941 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-03-27 01:20:48.782950 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-03-27 01:20:48.782958 | orchestrator | 2025-03-27 01:20:48.782981 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-03-27 01:20:48.782990 | orchestrator | Thursday 27 March 2025 01:17:30 +0000 (0:00:05.838) 0:05:43.744 ******** 2025-03-27 01:20:48.782999 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.783008 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.783016 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.783025 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.783033 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.783042 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.783050 | orchestrator | 2025-03-27 01:20:48.783059 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-03-27 01:20:48.783067 | orchestrator | Thursday 27 March 2025 01:17:31 +0000 (0:00:00.947) 0:05:44.691 ******** 2025-03-27 01:20:48.783076 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-03-27 01:20:48.783088 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-03-27 01:20:48.783096 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-03-27 01:20:48.783105 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-03-27 01:20:48.783133 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-03-27 01:20:48.783143 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-03-27 01:20:48.783152 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-03-27 01:20:48.783160 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-03-27 01:20:48.783173 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-03-27 01:20:48.783181 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.783190 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-03-27 01:20:48.783198 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-03-27 01:20:48.783207 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.783215 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-03-27 01:20:48.783223 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.783232 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-03-27 01:20:48.783240 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-03-27 01:20:48.783249 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-03-27 01:20:48.783257 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-03-27 01:20:48.783266 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-03-27 01:20:48.783274 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-03-27 01:20:48.783283 | orchestrator | 2025-03-27 01:20:48.783291 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-03-27 01:20:48.783300 | orchestrator | Thursday 27 March 2025 01:17:40 +0000 (0:00:08.289) 0:05:52.980 ******** 2025-03-27 01:20:48.783308 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-03-27 01:20:48.783317 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-03-27 01:20:48.783325 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-03-27 01:20:48.783334 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-03-27 01:20:48.783342 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-03-27 01:20:48.783350 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-27 01:20:48.783359 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-03-27 01:20:48.783367 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-27 01:20:48.783375 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-27 01:20:48.783384 
| orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-27 01:20:48.783392 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-03-27 01:20:48.783401 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-27 01:20:48.783409 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-03-27 01:20:48.783417 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.783425 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-03-27 01:20:48.783434 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.783442 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-03-27 01:20:48.783450 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.783459 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-27 01:20:48.783471 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-27 01:20:48.783480 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-03-27 01:20:48.783488 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-27 01:20:48.783496 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-27 01:20:48.783504 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-03-27 01:20:48.783513 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-27 01:20:48.783522 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-27 01:20:48.783549 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-03-27 01:20:48.783602 | orchestrator | 2025-03-27 01:20:48.783612 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-03-27 01:20:48.783621 | orchestrator | Thursday 27 March 2025 01:17:51 +0000 (0:00:11.044) 0:06:04.025 ******** 2025-03-27 01:20:48.783630 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.783638 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.783647 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.783655 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.783664 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.783672 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.783681 | orchestrator | 2025-03-27 01:20:48.783690 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-03-27 01:20:48.783698 | orchestrator | Thursday 27 March 2025 01:17:51 +0000 (0:00:00.773) 0:06:04.799 ******** 2025-03-27 01:20:48.783706 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.783715 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.783723 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.783732 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.783740 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.783749 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.783761 | orchestrator | 2025-03-27 01:20:48.783770 | 
orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-03-27 01:20:48.783779 | orchestrator | Thursday 27 March 2025 01:17:53 +0000 (0:00:01.147) 0:06:05.947 ******** 2025-03-27 01:20:48.783787 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.783796 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.783804 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.783813 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.783822 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.783830 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.783839 | orchestrator | 2025-03-27 01:20:48.783848 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-03-27 01:20:48.783857 | orchestrator | Thursday 27 March 2025 01:17:56 +0000 (0:00:02.922) 0:06:08.869 ******** 2025-03-27 01:20:48.783865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.783881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.783896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.783929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.783939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.783948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.783957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.783966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.783985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.783995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784033 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.784042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.784059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784097 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.784127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.784137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.784145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.784175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784210 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.784218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.784230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.784238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.784253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.784266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.784315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.784323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784386 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.784394 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.784407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.784420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.784428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.784458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784492 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.784500 | orchestrator | 2025-03-27 01:20:48.784508 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-03-27 01:20:48.784516 | orchestrator | Thursday 27 March 2025 01:17:58 +0000 (0:00:02.651) 0:06:11.520 ******** 2025-03-27 01:20:48.784524 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-03-27 01:20:48.784536 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-03-27 01:20:48.784544 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.784552 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-03-27 01:20:48.784576 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-03-27 01:20:48.784584 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.784592 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-03-27 01:20:48.784600 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-03-27 01:20:48.784608 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.784616 | orchestrator | skipping: [testbed-node-0] => 
(item=nova-compute)  2025-03-27 01:20:48.784624 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-03-27 01:20:48.784631 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.784639 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-03-27 01:20:48.784647 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-03-27 01:20:48.784655 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.784663 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-03-27 01:20:48.784671 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-03-27 01:20:48.784679 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.784687 | orchestrator | 2025-03-27 01:20:48.784695 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-03-27 01:20:48.784703 | orchestrator | Thursday 27 March 2025 01:17:59 +0000 (0:00:00.899) 0:06:12.420 ******** 2025-03-27 01:20:48.784711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.784720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.784728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.784740 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.784758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-03-27 01:20:48.784767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-03-27 01:20:48.784776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.784784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.784795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.784808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.784825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.784838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 
01:20:48.784847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.784855 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-03-27 01:20:48.784871 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.784879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-03-27 01:20:48.784896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.784920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.784943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.784952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.784961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.784977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.784991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.785000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.785015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.785024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.785032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.785046 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-03-27 01:20:48.785055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.785063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2025-03-27 01:20:48.785080 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-03-27 01:20:48.785088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.785100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-27 01:20:48.785108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.785122 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.785131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.785144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.785156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.785164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.785177 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.785186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': 
{'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.785194 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-03-27 01:20:48.785207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-03-27 01:20:48.785215 | orchestrator | 2025-03-27 01:20:48.785226 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-03-27 01:20:48.785234 | orchestrator | Thursday 27 March 2025 01:18:03 +0000 (0:00:04.221) 0:06:16.642 ******** 2025-03-27 01:20:48.785242 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.785250 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.785258 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.785266 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.785274 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.785282 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.785290 | orchestrator | 2025-03-27 01:20:48.785297 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-27 01:20:48.785305 | orchestrator | Thursday 27 March 2025 01:18:04 +0000 (0:00:00.751) 0:06:17.394 ******** 2025-03-27 01:20:48.785313 | orchestrator | 2025-03-27 01:20:48.785321 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-27 01:20:48.785329 | orchestrator | Thursday 27 March 2025 01:18:04 +0000 (0:00:00.307) 0:06:17.701 ******** 2025-03-27 01:20:48.785337 | orchestrator | 2025-03-27 01:20:48.785345 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-27 01:20:48.785353 | orchestrator | Thursday 27 March 2025 
01:18:04 +0000 (0:00:00.107) 0:06:17.809 ******** 2025-03-27 01:20:48.785361 | orchestrator | 2025-03-27 01:20:48.785368 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-27 01:20:48.785376 | orchestrator | Thursday 27 March 2025 01:18:05 +0000 (0:00:00.316) 0:06:18.126 ******** 2025-03-27 01:20:48.785384 | orchestrator | 2025-03-27 01:20:48.785392 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-27 01:20:48.785400 | orchestrator | Thursday 27 March 2025 01:18:05 +0000 (0:00:00.122) 0:06:18.248 ******** 2025-03-27 01:20:48.785407 | orchestrator | 2025-03-27 01:20:48.785415 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-03-27 01:20:48.785423 | orchestrator | Thursday 27 March 2025 01:18:05 +0000 (0:00:00.311) 0:06:18.560 ******** 2025-03-27 01:20:48.785431 | orchestrator | 2025-03-27 01:20:48.785439 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-03-27 01:20:48.785447 | orchestrator | Thursday 27 March 2025 01:18:05 +0000 (0:00:00.120) 0:06:18.681 ******** 2025-03-27 01:20:48.785454 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.785462 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:20:48.785470 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:20:48.785478 | orchestrator | 2025-03-27 01:20:48.785486 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-03-27 01:20:48.785498 | orchestrator | Thursday 27 March 2025 01:18:14 +0000 (0:00:08.349) 0:06:27.031 ******** 2025-03-27 01:20:48.785507 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.785515 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:20:48.785522 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:20:48.785530 | orchestrator | 2025-03-27 01:20:48.785538 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-03-27 01:20:48.785546 | orchestrator | Thursday 27 March 2025 01:18:25 +0000 (0:00:11.328) 0:06:38.359 ******** 2025-03-27 01:20:48.785554 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.785573 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.785581 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.785589 | orchestrator | 2025-03-27 01:20:48.785597 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-03-27 01:20:48.785605 | orchestrator | Thursday 27 March 2025 01:18:46 +0000 (0:00:21.086) 0:06:59.446 ******** 2025-03-27 01:20:48.785613 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.785621 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.785628 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.785636 | orchestrator | 2025-03-27 01:20:48.785644 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-03-27 01:20:48.785652 | orchestrator | Thursday 27 March 2025 01:19:10 +0000 (0:00:23.953) 0:07:23.399 ******** 2025-03-27 01:20:48.785660 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.785668 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.785675 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.785683 | orchestrator | 2025-03-27 01:20:48.785691 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] 
************************* 2025-03-27 01:20:48.785699 | orchestrator | Thursday 27 March 2025 01:19:11 +0000 (0:00:01.109) 0:07:24.509 ******** 2025-03-27 01:20:48.785707 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.785715 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.785722 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.785730 | orchestrator | 2025-03-27 01:20:48.785739 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-03-27 01:20:48.785747 | orchestrator | Thursday 27 March 2025 01:19:12 +0000 (0:00:00.802) 0:07:25.312 ******** 2025-03-27 01:20:48.785755 | orchestrator | changed: [testbed-node-3] 2025-03-27 01:20:48.785763 | orchestrator | changed: [testbed-node-4] 2025-03-27 01:20:48.785770 | orchestrator | changed: [testbed-node-5] 2025-03-27 01:20:48.785778 | orchestrator | 2025-03-27 01:20:48.785786 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-03-27 01:20:48.785794 | orchestrator | Thursday 27 March 2025 01:19:33 +0000 (0:00:20.982) 0:07:46.294 ******** 2025-03-27 01:20:48.785802 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.785810 | orchestrator | 2025-03-27 01:20:48.785821 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-03-27 01:20:48.785829 | orchestrator | Thursday 27 March 2025 01:19:33 +0000 (0:00:00.119) 0:07:46.414 ******** 2025-03-27 01:20:48.785837 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.785845 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.785853 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.785861 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.785869 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.785880 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
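
The FAILED - RETRYING record above is the nova-cell role polling until every nova-compute service has registered itself before the play continues. The sketch below shows roughly how that check can be reproduced by hand against the Nova API; it is illustrative only, not the task kolla-ansible runs. It assumes openstacksdk is installed and that a clouds.yaml entry named "testbed" exists, and it hard-codes the compute hosts seen in this log (testbed-node-3/4/5); none of those assumptions come from the job itself.

# Illustrative sketch: poll the Nova service list until all expected
# nova-compute hosts have registered, mirroring the retry loop above.
# Assumes openstacksdk and a hypothetical clouds.yaml entry "testbed".
import time
import openstack

EXPECTED_HOSTS = {"testbed-node-3", "testbed-node-4", "testbed-node-5"}

def wait_for_nova_compute(cloud="testbed", retries=20, delay=10):
    """Return True once every expected host shows up as a nova-compute service."""
    conn = openstack.connect(cloud=cloud)
    for attempt in range(1, retries + 1):
        registered = {
            service.host
            for service in conn.compute.services()
            if service.binary == "nova-compute"
        }
        missing = EXPECTED_HOSTS - registered
        if not missing:
            print(f"all nova-compute hosts registered after {attempt} attempt(s)")
            return True
        print(f"waiting for {sorted(missing)} ({retries - attempt} retries left)")
        time.sleep(delay)
    return False

if __name__ == "__main__":
    raise SystemExit(0 if wait_for_nova_compute() else 1)
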
2025-03-27 01:20:48.785888 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-03-27 01:20:48.785896 | orchestrator | 2025-03-27 01:20:48.785907 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-03-27 01:20:48.785915 | orchestrator | Thursday 27 March 2025 01:19:56 +0000 (0:00:22.806) 0:08:09.220 ******** 2025-03-27 01:20:48.785923 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.785931 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.785944 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.785952 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.785960 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.785968 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.785975 | orchestrator | 2025-03-27 01:20:48.785984 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-03-27 01:20:48.785992 | orchestrator | Thursday 27 March 2025 01:20:08 +0000 (0:00:12.572) 0:08:21.792 ******** 2025-03-27 01:20:48.786000 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.786008 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.786063 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.786073 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.786081 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.786089 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-03-27 01:20:48.786097 | orchestrator | 2025-03-27 01:20:48.786105 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-03-27 01:20:48.786113 | orchestrator | Thursday 27 March 2025 01:20:13 +0000 (0:00:04.487) 0:08:26.280 ******** 2025-03-27 01:20:48.786121 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-03-27 01:20:48.786128 | orchestrator | 2025-03-27 01:20:48.786136 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-03-27 01:20:48.786144 | orchestrator | Thursday 27 March 2025 01:20:25 +0000 (0:00:11.577) 0:08:37.857 ******** 2025-03-27 01:20:48.786152 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-03-27 01:20:48.786160 | orchestrator | 2025-03-27 01:20:48.786168 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-03-27 01:20:48.786176 | orchestrator | Thursday 27 March 2025 01:20:26 +0000 (0:00:01.396) 0:08:39.253 ******** 2025-03-27 01:20:48.786184 | orchestrator | skipping: [testbed-node-5] 2025-03-27 01:20:48.786192 | orchestrator | 2025-03-27 01:20:48.786200 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-03-27 01:20:48.786207 | orchestrator | Thursday 27 March 2025 01:20:28 +0000 (0:00:01.636) 0:08:40.890 ******** 2025-03-27 01:20:48.786215 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-03-27 01:20:48.786223 | orchestrator | 2025-03-27 01:20:48.786231 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-03-27 01:20:48.786239 | orchestrator | Thursday 27 March 2025 01:20:38 +0000 (0:00:10.068) 0:08:50.959 ******** 2025-03-27 01:20:48.786247 | orchestrator | ok: [testbed-node-3] 2025-03-27 01:20:48.786255 | orchestrator | ok: [testbed-node-4] 2025-03-27 01:20:48.786263 | orchestrator | ok: 
[testbed-node-5] 2025-03-27 01:20:48.786270 | orchestrator | ok: [testbed-node-0] 2025-03-27 01:20:48.786278 | orchestrator | ok: [testbed-node-1] 2025-03-27 01:20:48.786286 | orchestrator | ok: [testbed-node-2] 2025-03-27 01:20:48.786294 | orchestrator | 2025-03-27 01:20:48.786302 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-03-27 01:20:48.786310 | orchestrator | 2025-03-27 01:20:48.786317 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-03-27 01:20:48.786325 | orchestrator | Thursday 27 March 2025 01:20:40 +0000 (0:00:02.344) 0:08:53.303 ******** 2025-03-27 01:20:48.786333 | orchestrator | changed: [testbed-node-0] 2025-03-27 01:20:48.786341 | orchestrator | changed: [testbed-node-1] 2025-03-27 01:20:48.786349 | orchestrator | changed: [testbed-node-2] 2025-03-27 01:20:48.786357 | orchestrator | 2025-03-27 01:20:48.786365 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-03-27 01:20:48.786373 | orchestrator | 2025-03-27 01:20:48.786380 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-03-27 01:20:48.786388 | orchestrator | Thursday 27 March 2025 01:20:41 +0000 (0:00:01.191) 0:08:54.495 ******** 2025-03-27 01:20:48.786396 | orchestrator | skipping: [testbed-node-0] 2025-03-27 01:20:48.786404 | orchestrator | skipping: [testbed-node-1] 2025-03-27 01:20:48.786420 | orchestrator | skipping: [testbed-node-2] 2025-03-27 01:20:48.786428 | orchestrator | 2025-03-27 01:20:48.786436 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-03-27 01:20:48.786444 | orchestrator | 2025-03-27 01:20:48.786452 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-03-27 01:20:48.786460 | orchestrator | Thursday 27 March 2025 01:20:42 +0000 (0:00:00.874) 0:08:55.370 ******** 2025-03-27 01:20:48.786468 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-03-27 01:20:48.786476 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-03-27 01:20:48.786483 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-03-27 01:20:48.786491 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-03-27 01:20:48.786499 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-03-27 01:20:48.786507 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-03-27 01:20:48.786515 | orchestrator | skipping: [testbed-node-3] 2025-03-27 01:20:48.786523 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-03-27 01:20:48.786531 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-03-27 01:20:48.786539 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-03-27 01:20:48.786546 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-03-27 01:20:48.786554 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-03-27 01:20:48.786574 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-03-27 01:20:48.786582 | orchestrator | skipping: [testbed-node-4] 2025-03-27 01:20:48.786590 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-03-27 01:20:48.786598 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-03-27 01:20:48.786607 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-03-27 01:20:48.786622 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-03-27 01:20:48.786631 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-03-27 01:20:48.786638 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-03-27 01:20:48.786646 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-03-27 01:20:48.786654 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-03-27 01:20:48.786662 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-03-27 01:20:48.786670 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-03-27 01:20:48.786678 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-03-27 01:20:48.786685 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-03-27 01:20:48.786693 | orchestrator | skipping: [testbed-node-5]
2025-03-27 01:20:48.786701 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-03-27 01:20:48.786709 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-03-27 01:20:48.786717 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-03-27 01:20:48.786725 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-03-27 01:20:48.786733 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-03-27 01:20:48.786741 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-03-27 01:20:48.786749 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:20:48.786756 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:20:48.786764 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-03-27 01:20:48.786772 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-03-27 01:20:48.786780 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-03-27 01:20:48.786788 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-03-27 01:20:48.786795 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-03-27 01:20:48.786807 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-03-27 01:20:48.786815 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:20:48.786823 | orchestrator |
2025-03-27 01:20:48.786831 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-03-27 01:20:48.786839 | orchestrator |
2025-03-27 01:20:48.786846 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-03-27 01:20:48.786854 | orchestrator | Thursday 27 March 2025 01:20:44 +0000 (0:00:01.764) 0:08:57.134 ********
2025-03-27 01:20:48.786862 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-03-27 01:20:48.786870 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-03-27 01:20:48.786878 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:20:48.786885 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-03-27 01:20:48.786893 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-03-27 01:20:48.786901 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:20:48.786909 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-03-27 01:20:48.786917 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-03-27 01:20:48.786925 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:20:48.786933 | orchestrator |
2025-03-27 01:20:48.786941 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-03-27 01:20:48.786948 | orchestrator |
2025-03-27 01:20:48.786956 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-03-27 01:20:48.786964 | orchestrator | Thursday 27 March 2025 01:20:45 +0000 (0:00:00.872) 0:08:58.007 ********
2025-03-27 01:20:48.786972 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:20:48.786979 | orchestrator |
2025-03-27 01:20:48.786990 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-03-27 01:20:48.786998 | orchestrator |
2025-03-27 01:20:48.787006 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-03-27 01:20:48.787014 | orchestrator | Thursday 27 March 2025 01:20:46 +0000 (0:00:01.198) 0:08:59.206 ********
2025-03-27 01:20:48.787022 | orchestrator | skipping: [testbed-node-0]
2025-03-27 01:20:48.787030 | orchestrator | skipping: [testbed-node-1]
2025-03-27 01:20:48.787038 | orchestrator | skipping: [testbed-node-2]
2025-03-27 01:20:48.787045 | orchestrator |
2025-03-27 01:20:48.787053 | orchestrator | PLAY RECAP *********************************************************************
2025-03-27 01:20:48.787061 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-27 01:20:48.787069 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-03-27 01:20:48.787078 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-03-27 01:20:48.787086 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-03-27 01:20:48.787094 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-03-27 01:20:48.787102 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-03-27 01:20:48.787110 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-03-27 01:20:48.787118 | orchestrator |
2025-03-27 01:20:48.787125 | orchestrator |
2025-03-27 01:20:48.787136 | orchestrator | TASKS RECAP ********************************************************************
2025-03-27 01:20:51.828016 | orchestrator | Thursday 27 March 2025 01:20:47 +0000 (0:00:00.870) 0:09:00.077 ********
2025-03-27 01:20:51.828170 | orchestrator | ===============================================================================
2025-03-27 01:20:51.828188 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.97s
2025-03-27 01:20:51.828204 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 23.95s
2025-03-27 01:20:51.828220 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.81s
2025-03-27 01:20:51.828234 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.42s
2025-03-27 01:20:51.828249 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.09s
2025-03-27 01:20:51.828263 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 20.98s
2025-03-27 01:20:51.828278 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.19s
2025-03-27 01:20:51.828293 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.58s
2025-03-27 01:20:51.828307 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.94s
2025-03-27 01:20:51.828321 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 12.57s
2025-03-27 01:20:51.828336 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.40s
2025-03-27 01:20:51.828350 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.24s
2025-03-27 01:20:51.828364 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.65s
2025-03-27 01:20:51.828379 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.58s
2025-03-27 01:20:51.828393 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.33s
2025-03-27 01:20:51.828408 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 11.04s
2025-03-27 01:20:51.828422 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.69s
2025-03-27 01:20:51.828436 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 10.38s
2025-03-27 01:20:51.828451 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.07s
2025-03-27 01:20:51.828465 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 9.13s
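The PLAY RECAP above is the per-host summary Ansible prints at the end of a run (ok/changed/unreachable/failed/skipped/rescued/ignored counters), and it is what shows that this play finished without failures. A minimal, hypothetical sketch of turning such recap lines into per-host counters and flagging failed or unreachable hosts; this helper is not part of the job and assumes the Zuul timestamp prefix has already been stripped:

import re

# Hypothetical helper, not part of this job: parse PLAY RECAP lines such as
# "testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0"
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap(lines):
    results = {}
    for line in lines:
        match = RECAP_RE.match(line.strip())
        if not match:
            continue
        pairs = dict(pair.split("=") for pair in match.group("counters").split())
        results[match.group("host")] = {key: int(value) for key, value in pairs.items()}
    return results

if __name__ == "__main__":
    recap = parse_recap([
        "testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
        "testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0",
    ])
    bad = [host for host, c in recap.items() if c["failed"] or c["unreachable"]]
    print(bad or "no failed or unreachable hosts")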
2025-03-27 01:20:51.828480 | orchestrator | 2025-03-27 01:20:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 01:20:51.828496 | orchestrator | 2025-03-27 01:20:48 | INFO  | Wait 1 second(s) until the next check
[... task 06f38c9e-e3c1-4595-a798-aa145fe6df11 polled every ~3 seconds, state STARTED each time, from 01:20:51 through 01:25:17 ...]
2025-03-27 01:25:20.359949 | orchestrator | 2025-03-27 01:25:20 | INFO  | Task e7f38369-2291-4c01-8046-a1c2670b3945 is in state STARTED
2025-03-27 01:25:20.361305 | orchestrator | 2025-03-27 01:25:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
[... both tasks polled every ~3 seconds, state STARTED, through 01:25:29 ...]
2025-03-27 01:25:32.578840 | orchestrator | 2025-03-27 01:25:32 | INFO  | Task e7f38369-2291-4c01-8046-a1c2670b3945 is in state SUCCESS
2025-03-27 01:25:32.580037 | orchestrator | 2025-03-27 01:25:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
[... task 06f38c9e-e3c1-4595-a798-aa145fe6df11 polled every ~3 seconds, still in state STARTED, from 01:25:35 through 01:34:02 ...]
2025-03-27 01:34:05.382400 | orchestrator | 2025-03-27 01:34:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
01:34:08.434565 | orchestrator | 2025-03-27 01:34:05 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:08.434684 | orchestrator | 2025-03-27 01:34:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:11.488408 | orchestrator | 2025-03-27 01:34:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:11.488565 | orchestrator | 2025-03-27 01:34:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:14.550907 | orchestrator | 2025-03-27 01:34:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:14.551048 | orchestrator | 2025-03-27 01:34:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:17.605296 | orchestrator | 2025-03-27 01:34:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:17.605429 | orchestrator | 2025-03-27 01:34:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:20.654444 | orchestrator | 2025-03-27 01:34:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:20.654587 | orchestrator | 2025-03-27 01:34:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:23.709901 | orchestrator | 2025-03-27 01:34:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:23.710094 | orchestrator | 2025-03-27 01:34:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:26.763375 | orchestrator | 2025-03-27 01:34:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:26.763501 | orchestrator | 2025-03-27 01:34:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:29.817978 | orchestrator | 2025-03-27 01:34:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:29.818163 | orchestrator | 2025-03-27 01:34:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:32.865177 | orchestrator | 2025-03-27 01:34:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:32.865318 | orchestrator | 2025-03-27 01:34:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:32.865561 | orchestrator | 2025-03-27 01:34:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:35.917008 | orchestrator | 2025-03-27 01:34:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:38.968255 | orchestrator | 2025-03-27 01:34:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:38.968407 | orchestrator | 2025-03-27 01:34:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:42.024954 | orchestrator | 2025-03-27 01:34:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:42.025126 | orchestrator | 2025-03-27 01:34:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:45.076413 | orchestrator | 2025-03-27 01:34:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:45.076528 | orchestrator | 2025-03-27 01:34:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:48.134809 | orchestrator | 2025-03-27 01:34:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:48.134938 | orchestrator | 2025-03-27 01:34:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:51.188770 | orchestrator | 2025-03-27 01:34:48 | INFO  | Wait 1 second(s) 
until the next check 2025-03-27 01:34:51.188896 | orchestrator | 2025-03-27 01:34:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:54.247394 | orchestrator | 2025-03-27 01:34:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:54.247527 | orchestrator | 2025-03-27 01:34:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:34:57.308492 | orchestrator | 2025-03-27 01:34:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:34:57.308679 | orchestrator | 2025-03-27 01:34:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:00.364266 | orchestrator | 2025-03-27 01:34:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:00.364400 | orchestrator | 2025-03-27 01:35:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:03.414920 | orchestrator | 2025-03-27 01:35:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:03.415054 | orchestrator | 2025-03-27 01:35:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:06.468455 | orchestrator | 2025-03-27 01:35:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:06.468584 | orchestrator | 2025-03-27 01:35:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:09.512772 | orchestrator | 2025-03-27 01:35:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:09.512909 | orchestrator | 2025-03-27 01:35:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:12.562063 | orchestrator | 2025-03-27 01:35:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:12.562193 | orchestrator | 2025-03-27 01:35:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:15.609609 | orchestrator | 2025-03-27 01:35:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:15.609838 | orchestrator | 2025-03-27 01:35:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:18.663949 | orchestrator | 2025-03-27 01:35:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:18.664074 | orchestrator | 2025-03-27 01:35:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:21.722202 | orchestrator | 2025-03-27 01:35:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:21.722349 | orchestrator | 2025-03-27 01:35:21 | INFO  | Task 985cef94-fbdd-435a-807d-f1a13eca3afc is in state STARTED 2025-03-27 01:35:21.723805 | orchestrator | 2025-03-27 01:35:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:24.780250 | orchestrator | 2025-03-27 01:35:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:24.780386 | orchestrator | 2025-03-27 01:35:24 | INFO  | Task 985cef94-fbdd-435a-807d-f1a13eca3afc is in state STARTED 2025-03-27 01:35:24.781319 | orchestrator | 2025-03-27 01:35:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:24.781615 | orchestrator | 2025-03-27 01:35:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:27.834206 | orchestrator | 2025-03-27 01:35:27 | INFO  | Task 985cef94-fbdd-435a-807d-f1a13eca3afc is in state STARTED 2025-03-27 01:35:27.836782 | orchestrator | 2025-03-27 01:35:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 
01:35:30.885727 | orchestrator | 2025-03-27 01:35:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:30.885872 | orchestrator | 2025-03-27 01:35:30 | INFO  | Task 985cef94-fbdd-435a-807d-f1a13eca3afc is in state STARTED 2025-03-27 01:35:30.886422 | orchestrator | 2025-03-27 01:35:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:33.935395 | orchestrator | 2025-03-27 01:35:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:33.935533 | orchestrator | 2025-03-27 01:35:33 | INFO  | Task 985cef94-fbdd-435a-807d-f1a13eca3afc is in state SUCCESS 2025-03-27 01:35:33.936694 | orchestrator | 2025-03-27 01:35:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:36.980579 | orchestrator | 2025-03-27 01:35:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:36.980765 | orchestrator | 2025-03-27 01:35:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:40.040056 | orchestrator | 2025-03-27 01:35:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:40.040186 | orchestrator | 2025-03-27 01:35:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:43.090950 | orchestrator | 2025-03-27 01:35:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:43.091088 | orchestrator | 2025-03-27 01:35:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:46.136982 | orchestrator | 2025-03-27 01:35:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:46.137112 | orchestrator | 2025-03-27 01:35:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:49.180840 | orchestrator | 2025-03-27 01:35:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:49.180971 | orchestrator | 2025-03-27 01:35:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:52.230110 | orchestrator | 2025-03-27 01:35:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:52.230259 | orchestrator | 2025-03-27 01:35:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:55.286133 | orchestrator | 2025-03-27 01:35:52 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:55.286368 | orchestrator | 2025-03-27 01:35:55 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:35:58.345439 | orchestrator | 2025-03-27 01:35:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:35:58.345569 | orchestrator | 2025-03-27 01:35:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:36:01.400916 | orchestrator | 2025-03-27 01:35:58 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:36:01.401043 | orchestrator | 2025-03-27 01:36:01 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:36:04.452926 | orchestrator | 2025-03-27 01:36:01 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:36:04.453064 | orchestrator | 2025-03-27 01:36:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:36:07.504719 | orchestrator | 2025-03-27 01:36:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:36:07.504848 | orchestrator | 2025-03-27 01:36:07 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:36:10.556373 | orchestrator | 2025-03-27 01:36:07 | 
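The repeated entries above come from a client-side wait loop: the tooling checks the state of each submitted task every few seconds and logs it until the task reaches a terminal state. Below is a minimal sketch of such a loop, for orientation only; the names (wait_for_tasks, fetch_state, FINAL_STATES) and the plain-logging setup are illustrative assumptions, not the actual osism implementation.

    # Illustrative sketch of a poll-until-terminal-state loop (not the real osism code).
    import logging
    import time

    logging.basicConfig(
        format="%(asctime)s | %(levelname)s  | %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
        level=logging.INFO,
    )

    FINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}  # assumed terminal states


    def wait_for_tasks(task_ids, fetch_state, interval=1.0):
        """Poll each task's state until all of them reach a terminal state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = fetch_state(task_id)
                logging.info("Task %s is in state %s", task_id, state)
                if state in FINAL_STATES:
                    pending.discard(task_id)
            if pending:
                logging.info("Wait %d second(s) until the next check", interval)
                time.sleep(interval)


    if __name__ == "__main__":
        # Dummy state source: every task reports SUCCESS immediately.
        wait_for_tasks(["06f38c9e-e3c1-4595-a798-aa145fe6df11"],
                       fetch_state=lambda task_id: "SUCCESS")
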
[... the check/wait cycle for task 06f38c9e-e3c1-4595-a798-aa145fe6df11 continued every ~3 seconds, the task remaining in state STARTED from 01:35:36 through 01:45:17 ...]
2025-03-27 01:45:20.079185 | orchestrator | 2025-03-27 01:45:20 | INFO  | Task 8e73b1b8-2a4d-4828-8e32-4e973e5b31c8 is in state STARTED
2025-03-27 01:45:20.079606 | orchestrator | 2025-03-27 01:45:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 01:45:32.295166 | orchestrator | 2025-03-27 01:45:32 | INFO  | Task 8e73b1b8-2a4d-4828-8e32-4e973e5b31c8 is in state SUCCESS
2025-03-27 01:45:32.296700 | orchestrator | 2025-03-27 01:45:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
[... further checks every ~3 seconds; task 06f38c9e-e3c1-4595-a798-aa145fe6df11 still in state STARTED ...]
2025-03-27 01:46:08.934654 | orchestrator | 2025-03-27 01:46:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 01:46:11.990766 | orchestrator | 2025-03-27 01:46:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27
01:46:11.990896 | orchestrator | 2025-03-27 01:46:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:15.060384 | orchestrator | 2025-03-27 01:46:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:15.060526 | orchestrator | 2025-03-27 01:46:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:18.121228 | orchestrator | 2025-03-27 01:46:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:18.121362 | orchestrator | 2025-03-27 01:46:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:21.177927 | orchestrator | 2025-03-27 01:46:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:21.178103 | orchestrator | 2025-03-27 01:46:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:24.228651 | orchestrator | 2025-03-27 01:46:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:24.228847 | orchestrator | 2025-03-27 01:46:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:27.286168 | orchestrator | 2025-03-27 01:46:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:27.286309 | orchestrator | 2025-03-27 01:46:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:30.340909 | orchestrator | 2025-03-27 01:46:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:30.341034 | orchestrator | 2025-03-27 01:46:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:33.389305 | orchestrator | 2025-03-27 01:46:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:33.389419 | orchestrator | 2025-03-27 01:46:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:36.437128 | orchestrator | 2025-03-27 01:46:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:36.437261 | orchestrator | 2025-03-27 01:46:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:39.493272 | orchestrator | 2025-03-27 01:46:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:39.493410 | orchestrator | 2025-03-27 01:46:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:42.534544 | orchestrator | 2025-03-27 01:46:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:42.534731 | orchestrator | 2025-03-27 01:46:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:45.595843 | orchestrator | 2025-03-27 01:46:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:45.595981 | orchestrator | 2025-03-27 01:46:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:48.636312 | orchestrator | 2025-03-27 01:46:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:48.636441 | orchestrator | 2025-03-27 01:46:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:51.684965 | orchestrator | 2025-03-27 01:46:48 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:51.685085 | orchestrator | 2025-03-27 01:46:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:54.743435 | orchestrator | 2025-03-27 01:46:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:54.743568 | orchestrator | 2025-03-27 01:46:54 | INFO  | Task 
06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:46:57.795921 | orchestrator | 2025-03-27 01:46:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:46:57.796053 | orchestrator | 2025-03-27 01:46:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:00.846343 | orchestrator | 2025-03-27 01:46:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:00.846475 | orchestrator | 2025-03-27 01:47:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:03.905246 | orchestrator | 2025-03-27 01:47:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:03.905369 | orchestrator | 2025-03-27 01:47:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:06.954991 | orchestrator | 2025-03-27 01:47:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:06.955134 | orchestrator | 2025-03-27 01:47:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:10.002887 | orchestrator | 2025-03-27 01:47:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:10.003062 | orchestrator | 2025-03-27 01:47:10 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:13.056111 | orchestrator | 2025-03-27 01:47:10 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:13.056251 | orchestrator | 2025-03-27 01:47:13 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:16.108258 | orchestrator | 2025-03-27 01:47:13 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:16.108394 | orchestrator | 2025-03-27 01:47:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:19.153601 | orchestrator | 2025-03-27 01:47:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:19.153786 | orchestrator | 2025-03-27 01:47:19 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:22.195988 | orchestrator | 2025-03-27 01:47:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:22.196152 | orchestrator | 2025-03-27 01:47:22 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:25.246999 | orchestrator | 2025-03-27 01:47:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:25.247137 | orchestrator | 2025-03-27 01:47:25 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:28.303146 | orchestrator | 2025-03-27 01:47:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:28.303279 | orchestrator | 2025-03-27 01:47:28 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:31.356811 | orchestrator | 2025-03-27 01:47:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:31.356953 | orchestrator | 2025-03-27 01:47:31 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:34.419038 | orchestrator | 2025-03-27 01:47:31 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:34.419169 | orchestrator | 2025-03-27 01:47:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:37.476404 | orchestrator | 2025-03-27 01:47:34 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:37.476553 | orchestrator | 2025-03-27 01:47:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 
01:47:40.525229 | orchestrator | 2025-03-27 01:47:37 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:40.525370 | orchestrator | 2025-03-27 01:47:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:43.584473 | orchestrator | 2025-03-27 01:47:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:43.584632 | orchestrator | 2025-03-27 01:47:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:46.633466 | orchestrator | 2025-03-27 01:47:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:46.633584 | orchestrator | 2025-03-27 01:47:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:49.697025 | orchestrator | 2025-03-27 01:47:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:49.697156 | orchestrator | 2025-03-27 01:47:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:52.749296 | orchestrator | 2025-03-27 01:47:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:52.749430 | orchestrator | 2025-03-27 01:47:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:55.799447 | orchestrator | 2025-03-27 01:47:52 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:55.799575 | orchestrator | 2025-03-27 01:47:55 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:47:58.857994 | orchestrator | 2025-03-27 01:47:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:47:58.858179 | orchestrator | 2025-03-27 01:47:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:01.912823 | orchestrator | 2025-03-27 01:47:58 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:01.912954 | orchestrator | 2025-03-27 01:48:01 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:04.960904 | orchestrator | 2025-03-27 01:48:01 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:04.961038 | orchestrator | 2025-03-27 01:48:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:08.019082 | orchestrator | 2025-03-27 01:48:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:08.019216 | orchestrator | 2025-03-27 01:48:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:08.020380 | orchestrator | 2025-03-27 01:48:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:11.070943 | orchestrator | 2025-03-27 01:48:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:14.115854 | orchestrator | 2025-03-27 01:48:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:14.115986 | orchestrator | 2025-03-27 01:48:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:17.167445 | orchestrator | 2025-03-27 01:48:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:17.167572 | orchestrator | 2025-03-27 01:48:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:20.219861 | orchestrator | 2025-03-27 01:48:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:20.220002 | orchestrator | 2025-03-27 01:48:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:23.274985 | orchestrator | 2025-03-27 01:48:20 | INFO  | Wait 1 second(s) 
until the next check 2025-03-27 01:48:23.275123 | orchestrator | 2025-03-27 01:48:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:26.325184 | orchestrator | 2025-03-27 01:48:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:26.325315 | orchestrator | 2025-03-27 01:48:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:29.369003 | orchestrator | 2025-03-27 01:48:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:29.369129 | orchestrator | 2025-03-27 01:48:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:32.416106 | orchestrator | 2025-03-27 01:48:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:32.416236 | orchestrator | 2025-03-27 01:48:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:35.469412 | orchestrator | 2025-03-27 01:48:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:35.469544 | orchestrator | 2025-03-27 01:48:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:38.526268 | orchestrator | 2025-03-27 01:48:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:38.526398 | orchestrator | 2025-03-27 01:48:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:41.583821 | orchestrator | 2025-03-27 01:48:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:41.583947 | orchestrator | 2025-03-27 01:48:41 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:44.630395 | orchestrator | 2025-03-27 01:48:41 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:44.630545 | orchestrator | 2025-03-27 01:48:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:47.681030 | orchestrator | 2025-03-27 01:48:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:47.681159 | orchestrator | 2025-03-27 01:48:47 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:50.740053 | orchestrator | 2025-03-27 01:48:47 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:50.740187 | orchestrator | 2025-03-27 01:48:50 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:53.800886 | orchestrator | 2025-03-27 01:48:50 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:53.801012 | orchestrator | 2025-03-27 01:48:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:56.843332 | orchestrator | 2025-03-27 01:48:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:56.843455 | orchestrator | 2025-03-27 01:48:56 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:48:59.906797 | orchestrator | 2025-03-27 01:48:56 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:48:59.906925 | orchestrator | 2025-03-27 01:48:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:02.955748 | orchestrator | 2025-03-27 01:48:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:02.955895 | orchestrator | 2025-03-27 01:49:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:06.012519 | orchestrator | 2025-03-27 01:49:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:06.012650 | orchestrator | 2025-03-27 
01:49:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:09.070174 | orchestrator | 2025-03-27 01:49:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:09.070306 | orchestrator | 2025-03-27 01:49:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:12.136601 | orchestrator | 2025-03-27 01:49:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:12.136764 | orchestrator | 2025-03-27 01:49:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:15.189449 | orchestrator | 2025-03-27 01:49:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:15.189579 | orchestrator | 2025-03-27 01:49:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:18.250901 | orchestrator | 2025-03-27 01:49:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:18.251030 | orchestrator | 2025-03-27 01:49:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:21.306014 | orchestrator | 2025-03-27 01:49:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:21.306137 | orchestrator | 2025-03-27 01:49:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:24.356490 | orchestrator | 2025-03-27 01:49:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:24.356626 | orchestrator | 2025-03-27 01:49:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:27.414066 | orchestrator | 2025-03-27 01:49:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:27.414215 | orchestrator | 2025-03-27 01:49:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:30.478130 | orchestrator | 2025-03-27 01:49:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:30.478262 | orchestrator | 2025-03-27 01:49:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:33.525897 | orchestrator | 2025-03-27 01:49:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:33.526012 | orchestrator | 2025-03-27 01:49:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:36.579709 | orchestrator | 2025-03-27 01:49:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:36.579838 | orchestrator | 2025-03-27 01:49:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:39.631405 | orchestrator | 2025-03-27 01:49:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:39.631557 | orchestrator | 2025-03-27 01:49:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:42.682101 | orchestrator | 2025-03-27 01:49:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:42.682219 | orchestrator | 2025-03-27 01:49:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:45.730892 | orchestrator | 2025-03-27 01:49:42 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:45.731024 | orchestrator | 2025-03-27 01:49:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:48.783079 | orchestrator | 2025-03-27 01:49:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:48.783209 | orchestrator | 2025-03-27 01:49:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 
2025-03-27 01:49:51.834595 | orchestrator | 2025-03-27 01:49:48 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:51.834779 | orchestrator | 2025-03-27 01:49:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:54.883549 | orchestrator | 2025-03-27 01:49:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:54.883652 | orchestrator | 2025-03-27 01:49:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:49:57.938883 | orchestrator | 2025-03-27 01:49:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:49:57.939024 | orchestrator | 2025-03-27 01:49:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:00.989058 | orchestrator | 2025-03-27 01:49:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:00.989187 | orchestrator | 2025-03-27 01:50:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:04.035943 | orchestrator | 2025-03-27 01:50:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:04.036076 | orchestrator | 2025-03-27 01:50:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:07.085243 | orchestrator | 2025-03-27 01:50:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:07.085375 | orchestrator | 2025-03-27 01:50:07 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:10.137875 | orchestrator | 2025-03-27 01:50:07 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:10.138061 | orchestrator | 2025-03-27 01:50:10 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:13.193396 | orchestrator | 2025-03-27 01:50:10 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:13.193535 | orchestrator | 2025-03-27 01:50:13 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:16.243628 | orchestrator | 2025-03-27 01:50:13 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:16.243797 | orchestrator | 2025-03-27 01:50:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:19.300850 | orchestrator | 2025-03-27 01:50:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:19.301010 | orchestrator | 2025-03-27 01:50:19 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:22.360492 | orchestrator | 2025-03-27 01:50:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:22.360633 | orchestrator | 2025-03-27 01:50:22 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:25.411984 | orchestrator | 2025-03-27 01:50:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:25.412115 | orchestrator | 2025-03-27 01:50:25 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:28.462301 | orchestrator | 2025-03-27 01:50:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:28.462443 | orchestrator | 2025-03-27 01:50:28 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:31.525211 | orchestrator | 2025-03-27 01:50:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:31.525339 | orchestrator | 2025-03-27 01:50:31 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:34.568265 | orchestrator | 2025-03-27 01:50:31 | INFO  | Wait 1 
second(s) until the next check 2025-03-27 01:50:34.568393 | orchestrator | 2025-03-27 01:50:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:37.618979 | orchestrator | 2025-03-27 01:50:34 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:37.619111 | orchestrator | 2025-03-27 01:50:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:40.678315 | orchestrator | 2025-03-27 01:50:37 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:40.678437 | orchestrator | 2025-03-27 01:50:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:43.735086 | orchestrator | 2025-03-27 01:50:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:43.735214 | orchestrator | 2025-03-27 01:50:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:46.791750 | orchestrator | 2025-03-27 01:50:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:46.791890 | orchestrator | 2025-03-27 01:50:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:49.850015 | orchestrator | 2025-03-27 01:50:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:49.850219 | orchestrator | 2025-03-27 01:50:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:52.905801 | orchestrator | 2025-03-27 01:50:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:52.905940 | orchestrator | 2025-03-27 01:50:52 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:55.961597 | orchestrator | 2025-03-27 01:50:52 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:55.961767 | orchestrator | 2025-03-27 01:50:55 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:50:59.010259 | orchestrator | 2025-03-27 01:50:55 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:50:59.010383 | orchestrator | 2025-03-27 01:50:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:02.065581 | orchestrator | 2025-03-27 01:50:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:02.065742 | orchestrator | 2025-03-27 01:51:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:05.118524 | orchestrator | 2025-03-27 01:51:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:05.118656 | orchestrator | 2025-03-27 01:51:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:08.171984 | orchestrator | 2025-03-27 01:51:05 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:08.172122 | orchestrator | 2025-03-27 01:51:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:11.224024 | orchestrator | 2025-03-27 01:51:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:11.224162 | orchestrator | 2025-03-27 01:51:11 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:14.274079 | orchestrator | 2025-03-27 01:51:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:14.274218 | orchestrator | 2025-03-27 01:51:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:17.321394 | orchestrator | 2025-03-27 01:51:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:17.321524 | orchestrator | 
2025-03-27 01:51:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:20.372573 | orchestrator | 2025-03-27 01:51:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:20.372756 | orchestrator | 2025-03-27 01:51:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:23.426148 | orchestrator | 2025-03-27 01:51:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:23.426294 | orchestrator | 2025-03-27 01:51:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:26.483271 | orchestrator | 2025-03-27 01:51:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:26.483392 | orchestrator | 2025-03-27 01:51:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:29.528610 | orchestrator | 2025-03-27 01:51:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:29.528799 | orchestrator | 2025-03-27 01:51:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:32.577153 | orchestrator | 2025-03-27 01:51:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:32.577291 | orchestrator | 2025-03-27 01:51:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:35.628178 | orchestrator | 2025-03-27 01:51:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:35.628319 | orchestrator | 2025-03-27 01:51:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:38.677739 | orchestrator | 2025-03-27 01:51:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:38.677878 | orchestrator | 2025-03-27 01:51:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:41.735196 | orchestrator | 2025-03-27 01:51:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:41.735333 | orchestrator | 2025-03-27 01:51:41 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:44.788552 | orchestrator | 2025-03-27 01:51:41 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:44.788727 | orchestrator | 2025-03-27 01:51:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:47.840564 | orchestrator | 2025-03-27 01:51:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:47.840740 | orchestrator | 2025-03-27 01:51:47 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:50.890515 | orchestrator | 2025-03-27 01:51:47 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:50.890652 | orchestrator | 2025-03-27 01:51:50 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:53.948577 | orchestrator | 2025-03-27 01:51:50 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:53.948773 | orchestrator | 2025-03-27 01:51:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:51:57.006299 | orchestrator | 2025-03-27 01:51:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:51:57.006418 | orchestrator | 2025-03-27 01:51:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:00.060937 | orchestrator | 2025-03-27 01:51:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:00.061088 | orchestrator | 2025-03-27 01:52:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in 
state STARTED 2025-03-27 01:52:03.112156 | orchestrator | 2025-03-27 01:52:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:03.112290 | orchestrator | 2025-03-27 01:52:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:06.162896 | orchestrator | 2025-03-27 01:52:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:06.162993 | orchestrator | 2025-03-27 01:52:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:09.212953 | orchestrator | 2025-03-27 01:52:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:09.213080 | orchestrator | 2025-03-27 01:52:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:12.260438 | orchestrator | 2025-03-27 01:52:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:12.260571 | orchestrator | 2025-03-27 01:52:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:15.318616 | orchestrator | 2025-03-27 01:52:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:15.318795 | orchestrator | 2025-03-27 01:52:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:18.374827 | orchestrator | 2025-03-27 01:52:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:18.374955 | orchestrator | 2025-03-27 01:52:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:21.431631 | orchestrator | 2025-03-27 01:52:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:21.431811 | orchestrator | 2025-03-27 01:52:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:24.484133 | orchestrator | 2025-03-27 01:52:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:24.484237 | orchestrator | 2025-03-27 01:52:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:27.541090 | orchestrator | 2025-03-27 01:52:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:27.541223 | orchestrator | 2025-03-27 01:52:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:30.588946 | orchestrator | 2025-03-27 01:52:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:30.589076 | orchestrator | 2025-03-27 01:52:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:33.639907 | orchestrator | 2025-03-27 01:52:30 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:33.640036 | orchestrator | 2025-03-27 01:52:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:36.690527 | orchestrator | 2025-03-27 01:52:33 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:36.690652 | orchestrator | 2025-03-27 01:52:36 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:39.733769 | orchestrator | 2025-03-27 01:52:36 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:39.733900 | orchestrator | 2025-03-27 01:52:39 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:39.734734 | orchestrator | 2025-03-27 01:52:39 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:42.779840 | orchestrator | 2025-03-27 01:52:42 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:45.833554 | orchestrator | 2025-03-27 01:52:42 | 
INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:45.833742 | orchestrator | 2025-03-27 01:52:45 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:48.889359 | orchestrator | 2025-03-27 01:52:45 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:48.889493 | orchestrator | 2025-03-27 01:52:48 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:51.937648 | orchestrator | 2025-03-27 01:52:48 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:51.937819 | orchestrator | 2025-03-27 01:52:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:54.992741 | orchestrator | 2025-03-27 01:52:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:54.992898 | orchestrator | 2025-03-27 01:52:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:52:58.050737 | orchestrator | 2025-03-27 01:52:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:52:58.050860 | orchestrator | 2025-03-27 01:52:58 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:01.108943 | orchestrator | 2025-03-27 01:52:58 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:01.109064 | orchestrator | 2025-03-27 01:53:01 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:04.163985 | orchestrator | 2025-03-27 01:53:01 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:04.164116 | orchestrator | 2025-03-27 01:53:04 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:07.228930 | orchestrator | 2025-03-27 01:53:04 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:07.229073 | orchestrator | 2025-03-27 01:53:07 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:10.288652 | orchestrator | 2025-03-27 01:53:07 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:10.288822 | orchestrator | 2025-03-27 01:53:10 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:13.349009 | orchestrator | 2025-03-27 01:53:10 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:13.349150 | orchestrator | 2025-03-27 01:53:13 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:16.405047 | orchestrator | 2025-03-27 01:53:13 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:16.405181 | orchestrator | 2025-03-27 01:53:16 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:19.455065 | orchestrator | 2025-03-27 01:53:16 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:19.455207 | orchestrator | 2025-03-27 01:53:19 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:22.507006 | orchestrator | 2025-03-27 01:53:19 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:22.507137 | orchestrator | 2025-03-27 01:53:22 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:25.560831 | orchestrator | 2025-03-27 01:53:22 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:25.560981 | orchestrator | 2025-03-27 01:53:25 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:28.619535 | orchestrator | 2025-03-27 01:53:25 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:28.619666 | 
orchestrator | 2025-03-27 01:53:28 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:31.672644 | orchestrator | 2025-03-27 01:53:28 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:31.672805 | orchestrator | 2025-03-27 01:53:31 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:34.724074 | orchestrator | 2025-03-27 01:53:31 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:34.724204 | orchestrator | 2025-03-27 01:53:34 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:37.777638 | orchestrator | 2025-03-27 01:53:34 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:37.777827 | orchestrator | 2025-03-27 01:53:37 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:40.837522 | orchestrator | 2025-03-27 01:53:37 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:40.837663 | orchestrator | 2025-03-27 01:53:40 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:43.899295 | orchestrator | 2025-03-27 01:53:40 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:43.899422 | orchestrator | 2025-03-27 01:53:43 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:46.950298 | orchestrator | 2025-03-27 01:53:43 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:46.950444 | orchestrator | 2025-03-27 01:53:46 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:49.994133 | orchestrator | 2025-03-27 01:53:46 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:49.994226 | orchestrator | 2025-03-27 01:53:49 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:53.045798 | orchestrator | 2025-03-27 01:53:49 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:53.045937 | orchestrator | 2025-03-27 01:53:53 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:56.094064 | orchestrator | 2025-03-27 01:53:53 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:56.094208 | orchestrator | 2025-03-27 01:53:56 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:53:59.140529 | orchestrator | 2025-03-27 01:53:56 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:53:59.140662 | orchestrator | 2025-03-27 01:53:59 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:02.188999 | orchestrator | 2025-03-27 01:53:59 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:02.189130 | orchestrator | 2025-03-27 01:54:02 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:05.237115 | orchestrator | 2025-03-27 01:54:02 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:05.237245 | orchestrator | 2025-03-27 01:54:05 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:08.289896 | orchestrator | 2025-03-27 01:54:05 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:08.290089 | orchestrator | 2025-03-27 01:54:08 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:11.341097 | orchestrator | 2025-03-27 01:54:08 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:11.341230 | orchestrator | 2025-03-27 01:54:11 | INFO  | Task 
06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:14.397481 | orchestrator | 2025-03-27 01:54:11 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:14.397634 | orchestrator | 2025-03-27 01:54:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:14.402826 | orchestrator | 2025-03-27 01:54:14 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:17.450181 | orchestrator | 2025-03-27 01:54:17 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:20.506846 | orchestrator | 2025-03-27 01:54:17 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:20.506976 | orchestrator | 2025-03-27 01:54:20 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:23.555567 | orchestrator | 2025-03-27 01:54:20 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:23.555735 | orchestrator | 2025-03-27 01:54:23 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:26.608259 | orchestrator | 2025-03-27 01:54:23 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:26.609188 | orchestrator | 2025-03-27 01:54:26 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:29.655524 | orchestrator | 2025-03-27 01:54:26 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:29.655660 | orchestrator | 2025-03-27 01:54:29 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:32.711074 | orchestrator | 2025-03-27 01:54:29 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:32.711200 | orchestrator | 2025-03-27 01:54:32 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:35.765575 | orchestrator | 2025-03-27 01:54:32 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:35.765865 | orchestrator | 2025-03-27 01:54:35 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:38.815635 | orchestrator | 2025-03-27 01:54:35 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:38.815844 | orchestrator | 2025-03-27 01:54:38 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:41.873444 | orchestrator | 2025-03-27 01:54:38 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:41.873573 | orchestrator | 2025-03-27 01:54:41 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:44.928318 | orchestrator | 2025-03-27 01:54:41 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:44.928453 | orchestrator | 2025-03-27 01:54:44 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:47.976389 | orchestrator | 2025-03-27 01:54:44 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:47.976522 | orchestrator | 2025-03-27 01:54:47 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:51.031860 | orchestrator | 2025-03-27 01:54:47 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:51.031999 | orchestrator | 2025-03-27 01:54:51 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:54:54.078337 | orchestrator | 2025-03-27 01:54:51 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:54.078472 | orchestrator | 2025-03-27 01:54:54 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 
01:54:57.129236 | orchestrator | 2025-03-27 01:54:54 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:54:57.129370 | orchestrator | 2025-03-27 01:54:57 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:00.188421 | orchestrator | 2025-03-27 01:54:57 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:55:00.188552 | orchestrator | 2025-03-27 01:55:00 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:00.188738 | orchestrator | 2025-03-27 01:55:00 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:55:03.240660 | orchestrator | 2025-03-27 01:55:03 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:06.298286 | orchestrator | 2025-03-27 01:55:03 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:55:06.298424 | orchestrator | 2025-03-27 01:55:06 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:09.353479 | orchestrator | 2025-03-27 01:55:06 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:55:09.353638 | orchestrator | 2025-03-27 01:55:09 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:12.411041 | orchestrator | 2025-03-27 01:55:09 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:55:12.411175 | orchestrator | 2025-03-27 01:55:12 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:15.469388 | orchestrator | 2025-03-27 01:55:12 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:55:15.469523 | orchestrator | 2025-03-27 01:55:15 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:18.520879 | orchestrator | 2025-03-27 01:55:15 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:55:18.521024 | orchestrator | 2025-03-27 01:55:18 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:21.587532 | orchestrator | 2025-03-27 01:55:18 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:55:21.587675 | orchestrator | 2025-03-27 01:55:21 | INFO  | Task 4c4b0e90-5b8c-446a-b486-ba3ecbafcb81 is in state STARTED 2025-03-27 01:55:24.661012 | orchestrator | 2025-03-27 01:55:21 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:24.661123 | orchestrator | 2025-03-27 01:55:21 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:55:24.661158 | orchestrator | 2025-03-27 01:55:24 | INFO  | Task 4c4b0e90-5b8c-446a-b486-ba3ecbafcb81 is in state STARTED 2025-03-27 01:55:24.662523 | orchestrator | 2025-03-27 01:55:24 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:24.662560 | orchestrator | 2025-03-27 01:55:24 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:55:27.743094 | orchestrator | 2025-03-27 01:55:27 | INFO  | Task 4c4b0e90-5b8c-446a-b486-ba3ecbafcb81 is in state STARTED 2025-03-27 01:55:27.743866 | orchestrator | 2025-03-27 01:55:27 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:27.744102 | orchestrator | 2025-03-27 01:55:27 | INFO  | Wait 1 second(s) until the next check 2025-03-27 01:55:30.812284 | orchestrator | 2025-03-27 01:55:30 | INFO  | Task 4c4b0e90-5b8c-446a-b486-ba3ecbafcb81 is in state SUCCESS 2025-03-27 01:55:33.862119 | orchestrator | 2025-03-27 01:55:30 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED 2025-03-27 01:55:33.862236 | orchestrator | 
2025-03-27 01:55:30 | INFO  | Wait 1 second(s) until the next check
2025-03-27 01:55:33.862273 | orchestrator | 2025-03-27 01:55:33 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 01:55:36.910872 | orchestrator | 2025-03-27 01:55:33 | INFO  | Wait 1 second(s) until the next check
[... the same "Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED" / "Wait 1 second(s) until the next check" pair repeats roughly every 3 seconds from 01:55:36 to 02:00:11; the task never leaves STARTED ...]
2025-03-27 02:00:14.730670 | orchestrator | 2025-03-27 02:00:11 | INFO  | Wait 1 second(s) until the next check
2025-03-27 02:00:14.730806 | orchestrator | 2025-03-27 02:00:14 | INFO  | Task 06f38c9e-e3c1-4595-a798-aa145fe6df11 is in state STARTED
2025-03-27 02:00:17.815252 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-03-27 02:00:17.823700 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-03-27 02:00:18.522830 |
2025-03-27 02:00:18.522984 | PLAY [Post output play]
2025-03-27 02:00:18.551718 |
2025-03-27 02:00:18.551843 | LOOP [stage-output : Register sources]
2025-03-27 02:00:18.628078 |
2025-03-27 02:00:18.628504 | TASK [stage-output : Check sudo]
2025-03-27 02:00:19.378857 | orchestrator | sudo: a password is required
2025-03-27 02:00:19.684191 | orchestrator | ok: Runtime: 0:00:00.014257
2025-03-27 02:00:19.693929 |
2025-03-27 02:00:19.694068 | LOOP [stage-output : Set source and destination for files and folders]
2025-03-27 02:00:19.740372 |
2025-03-27 02:00:19.740757 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-03-27 02:00:19.826667 | orchestrator | ok
2025-03-27 02:00:19.835035 |
2025-03-27 02:00:19.835143 | LOOP [stage-output : Ensure target folders exist]
2025-03-27 02:00:20.283480 | orchestrator | ok: "docs"
2025-03-27 02:00:20.283849 |
2025-03-27 02:00:20.521453 | orchestrator | ok: "artifacts"
2025-03-27 02:00:20.753684 | orchestrator | ok: "logs"
2025-03-27 02:00:20.774468 |
2025-03-27 02:00:20.774660 | LOOP [stage-output : Copy files and folders to staging folder]
2025-03-27 02:00:20.819449 |
2025-03-27 02:00:20.819716 | TASK [stage-output : Make all log files readable]
2025-03-27 02:00:21.093196 | orchestrator | ok
2025-03-27 02:00:21.103787 |
2025-03-27 02:00:21.103911 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-03-27 02:00:21.160566 | orchestrator | skipping: Conditional result was False
2025-03-27 02:00:21.177488 |
2025-03-27 02:00:21.177649 | TASK [stage-output : Discover log files for compression]
2025-03-27 02:00:21.204646 | orchestrator | skipping: Conditional result was False
2025-03-27 02:00:21.228570 |
2025-03-27 02:00:21.228693 | LOOP [stage-output : Archive everything from logs]
2025-03-27 02:00:21.303339 |
2025-03-27 02:00:21.303560 | PLAY [Post cleanup play]
2025-03-27 02:00:21.337714 |
2025-03-27 02:00:21.337897 | TASK [Set cloud fact (Zuul deployment)]
2025-03-27 02:00:21.415870 | orchestrator | ok
2025-03-27 02:00:21.430009 |
2025-03-27 02:00:21.430147 | TASK [Set cloud fact (local deployment)]
2025-03-27 02:00:21.469098 | orchestrator | skipping: Conditional result was False
2025-03-27 02:00:21.487669 |
2025-03-27 02:00:21.487784 | TASK [Clean the cloud environment]
2025-03-27 02:00:22.129197 | orchestrator | 2025-03-27 02:00:22 - clean up servers
2025-03-27 02:00:23.012810 | orchestrator | 2025-03-27 02:00:23 - testbed-manager
2025-03-27 02:00:23.104152 | orchestrator | 2025-03-27 02:00:23 - testbed-node-1
2025-03-27 02:00:23.198885 | orchestrator | 2025-03-27 02:00:23 - testbed-node-5
2025-03-27 02:00:23.293251 | orchestrator | 2025-03-27 02:00:23 - testbed-node-2
2025-03-27 02:00:23.390051 | orchestrator | 2025-03-27 02:00:23 - testbed-node-0
2025-03-27 02:00:23.494952 | orchestrator | 2025-03-27 02:00:23 - testbed-node-3
2025-03-27 02:00:23.592722 | orchestrator | 2025-03-27 02:00:23 - testbed-node-4
2025-03-27 02:00:23.688065 | orchestrator | 2025-03-27 02:00:23 - clean up keypairs
2025-03-27 02:00:23.705586 | orchestrator | 2025-03-27 02:00:23 - testbed
2025-03-27 02:00:23.729961 | orchestrator | 2025-03-27 02:00:23 - wait for servers to be gone
2025-03-27 02:00:37.068879 | orchestrator | 2025-03-27 02:00:37 - clean up ports
2025-03-27 02:00:37.269863 | orchestrator | 2025-03-27 02:00:37 - 0883e32d-801d-4036-b2c3-58b1865d5393
2025-03-27 02:00:37.510964 | orchestrator | 2025-03-27 02:00:37 - 35dd892d-a4da-4e5e-a278-7af9ae6f070a
2025-03-27 02:00:37.749083 | orchestrator | 2025-03-27 02:00:37 - 4d25ef40-c7fe-49a0-a02b-afc9b47184e3
2025-03-27 02:00:38.123807 | orchestrator | 2025-03-27 02:00:38 - 70b86d6b-2637-4896-bb23-5958abc99a68
2025-03-27 02:00:38.349825 | orchestrator | 2025-03-27 02:00:38 - 9196fb83-db77-485f-9095-a25235e41813
2025-03-27 02:00:38.538966 | orchestrator | 2025-03-27 02:00:38 - 98a95194-b1fa-4db4-86fd-28b6004e33ff
2025-03-27 02:00:38.726923 | orchestrator | 2025-03-27 02:00:38 - ad4eee81-8c4d-438a-a3db-e80d525bce8a
2025-03-27 02:00:38.965788 | orchestrator | 2025-03-27 02:00:38 - clean up volumes
2025-03-27 02:00:39.107226 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-3-node-base
2025-03-27 02:00:39.147336 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-4-node-base
2025-03-27 02:00:39.188216 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-5-node-base
2025-03-27 02:00:39.234992 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-0-node-base
2025-03-27 02:00:39.278834 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-1-node-base
2025-03-27 02:00:39.320621 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-2-node-base
2025-03-27 02:00:39.362883 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-6-node-0
2025-03-27 02:00:39.405374 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-0-node-0
2025-03-27 02:00:39.444737 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-8-node-2
2025-03-27 02:00:39.486774 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-manager-base
2025-03-27 02:00:39.527691 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-13-node-1
2025-03-27 02:00:39.575312 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-4-node-4
2025-03-27 02:00:39.614065 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-7-node-1
2025-03-27 02:00:39.655692 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-16-node-4
2025-03-27 02:00:39.700722 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-5-node-5
2025-03-27 02:00:39.744847 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-10-node-4
2025-03-27 02:00:39.787459 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-1-node-1
2025-03-27 02:00:39.825633 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-14-node-2
2025-03-27 02:00:39.865045 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-12-node-0
2025-03-27 02:00:39.904794 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-3-node-3
2025-03-27 02:00:39.945901 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-2-node-2
2025-03-27 02:00:39.990496 | orchestrator | 2025-03-27 02:00:39 - testbed-volume-11-node-5
2025-03-27 02:00:40.030260 | orchestrator | 2025-03-27 02:00:40 - testbed-volume-9-node-3
2025-03-27 02:00:40.073454 | orchestrator | 2025-03-27 02:00:40 - testbed-volume-15-node-3
2025-03-27 02:00:40.115025 | orchestrator | 2025-03-27 02:00:40 - testbed-volume-17-node-5
2025-03-27 02:00:40.154266 | orchestrator | 2025-03-27 02:00:40 - disconnect routers
2025-03-27 02:00:40.266007 | orchestrator | 2025-03-27 02:00:40 - testbed
2025-03-27 02:00:40.976864 | orchestrator | 2025-03-27 02:00:40 - clean up subnets
2025-03-27 02:00:41.012165 | orchestrator | 2025-03-27 02:00:41 - subnet-testbed-management
2025-03-27 02:00:41.155728 | orchestrator | 2025-03-27 02:00:41 - clean up networks
2025-03-27 02:00:41.310291 | orchestrator | 2025-03-27 02:00:41 - net-testbed-management
2025-03-27 02:00:41.550227 | orchestrator | 2025-03-27 02:00:41 - clean up security groups
2025-03-27 02:00:41.586199 | orchestrator | 2025-03-27 02:00:41 - testbed-management
2025-03-27 02:00:41.669807 | orchestrator | 2025-03-27 02:00:41 - testbed-node
2025-03-27 02:00:41.751229 | orchestrator | 2025-03-27 02:00:41 - clean up floating ips
2025-03-27 02:00:41.779677 | orchestrator | 2025-03-27 02:00:41 - 81.163.193.178
2025-03-27 02:00:42.226088 | orchestrator | 2025-03-27 02:00:42 - clean up routers
2025-03-27 02:00:42.322126 | orchestrator | 2025-03-27 02:00:42 - testbed
2025-03-27 02:00:43.062410 | orchestrator | changed
2025-03-27 02:00:43.103608 |
2025-03-27 02:00:43.103707 | PLAY RECAP
2025-03-27 02:00:43.103764 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-03-27 02:00:43.103788 |
2025-03-27 02:00:43.241594 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
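The "Clean the cloud environment" task above tears the testbed down in a deliberate order: servers and the keypair first, then a wait until the servers are really gone, then ports, volumes, router interfaces, subnets, networks, security groups, the floating IP, and finally the router itself. A rough sketch of that ordering with openstacksdk could look like the following; the cloud name, the "testbed" name prefix, and the port and volume filters are assumptions for illustration, and the real cleanup lives in the osism/testbed tooling rather than in this snippet.

import openstack

# Assumed cloud name from clouds.yaml; the job injects its own credentials.
conn = openstack.connect(cloud="testbed")
PREFIX = "testbed"

# 1. Servers and the keypair first, so nothing still holds ports or volumes.
servers = [s for s in conn.compute.servers() if s.name.startswith(PREFIX)]
for server in servers:
    conn.compute.delete_server(server)
for keypair in conn.compute.keypairs():
    if keypair.name.startswith(PREFIX):
        conn.compute.delete_keypair(keypair)

# 2. Wait for the servers to be gone before touching ports and volumes.
for server in servers:
    conn.compute.wait_for_delete(server)

# 3. Leftover ports on the management network, then the volumes.
network = conn.network.find_network(f"net-{PREFIX}-management")
if network:
    for port in conn.network.ports(network_id=network.id):
        if not port.device_owner:  # skip router- and DHCP-owned ports
            conn.network.delete_port(port)
for volume in conn.block_storage.volumes():
    if volume.name.startswith(f"{PREFIX}-volume"):
        conn.block_storage.delete_volume(volume)

# 4. Disconnect the router, then remove the subnet and the network.
router = conn.network.find_router(PREFIX)
subnet = conn.network.find_subnet(f"subnet-{PREFIX}-management")
if router and subnet:
    conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
if subnet:
    conn.network.delete_subnet(subnet)
if network:
    conn.network.delete_network(network)

# 5. Security groups, now-detached floating IPs, and finally the router.
for group in conn.network.security_groups():
    if group.name.startswith(PREFIX):
        conn.network.delete_security_group(group)
for ip in conn.network.ips():
    if ip.port_id is None:
        conn.network.delete_ip(ip)
if router:
    conn.network.delete_router(router)

The ordering is the point: instances must be gone before their ports and volumes can be removed, and the router interface has to be detached before the subnet, network, and router can be deleted.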
2025-03-27 02:00:43.249335 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-03-27 02:00:44.019494 |
2025-03-27 02:00:44.019700 | PLAY [Base post-fetch]
2025-03-27 02:00:44.050623 |
2025-03-27 02:00:44.050807 | TASK [fetch-output : Set log path for multiple nodes]
2025-03-27 02:00:44.128933 | orchestrator | skipping: Conditional result was False
2025-03-27 02:00:44.144707 |
2025-03-27 02:00:44.144931 | TASK [fetch-output : Set log path for single node]
2025-03-27 02:00:44.200550 | orchestrator | ok
2025-03-27 02:00:44.208205 |
2025-03-27 02:00:44.208327 | LOOP [fetch-output : Ensure local output dirs]
2025-03-27 02:00:44.716642 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/9b1e4d12f4194a679b3d2d6e2f315612/work/logs"
2025-03-27 02:00:45.005488 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/9b1e4d12f4194a679b3d2d6e2f315612/work/artifacts"
2025-03-27 02:00:45.284565 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/9b1e4d12f4194a679b3d2d6e2f315612/work/docs"
2025-03-27 02:00:45.307903 |
2025-03-27 02:00:45.308085 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-03-27 02:00:46.123235 | orchestrator | changed: .d..t...... ./
2025-03-27 02:00:46.123549 | orchestrator | changed: All items complete
2025-03-27 02:00:46.123589 |
2025-03-27 02:00:46.750363 | orchestrator | changed: .d..t...... ./
2025-03-27 02:00:47.332090 | orchestrator | changed: .d..t...... ./
2025-03-27 02:00:47.367268 |
2025-03-27 02:00:47.367461 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-03-27 02:00:47.415834 | orchestrator | skipping: Conditional result was False
2025-03-27 02:00:47.422758 | orchestrator | skipping: Conditional result was False
2025-03-27 02:00:47.468075 |
2025-03-27 02:00:47.468169 | PLAY RECAP
2025-03-27 02:00:47.468228 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-03-27 02:00:47.468255 |
2025-03-27 02:00:47.586894 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-03-27 02:00:47.590173 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-03-27 02:00:48.328508 |
2025-03-27 02:00:48.328675 | PLAY [Base post]
2025-03-27 02:00:48.357294 |
2025-03-27 02:00:48.357437 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-03-27 02:00:49.211108 | orchestrator | changed
2025-03-27 02:00:49.250375 |
2025-03-27 02:00:49.250500 | PLAY RECAP
2025-03-27 02:00:49.250571 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-03-27 02:00:49.250643 |
2025-03-27 02:00:49.359920 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-03-27 02:00:49.368825 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-03-27 02:00:50.119641 |
2025-03-27 02:00:50.119805 | PLAY [Base post-logs]
2025-03-27 02:00:50.136559 |
2025-03-27 02:00:50.136693 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-03-27 02:00:50.605579 | localhost | changed
2025-03-27 02:00:50.609495 |
2025-03-27 02:00:50.609632 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-03-27 02:00:50.638755 | localhost | ok
2025-03-27 02:00:50.644854 |
2025-03-27 02:00:50.644963 | TASK [Set zuul-log-path fact]
2025-03-27 02:00:50.668308 | localhost | ok
2025-03-27 02:00:50.686115 |
2025-03-27 02:00:50.686239 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-03-27 02:00:50.715558 | localhost | ok
2025-03-27 02:00:50.722607 |
2025-03-27 02:00:50.722717 | TASK [upload-logs : Create log directories]
2025-03-27 02:00:51.257897 | localhost | changed
2025-03-27 02:00:51.265510 |
2025-03-27 02:00:51.265661 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-03-27 02:00:51.811313 | localhost -> localhost | ok: Runtime: 0:00:00.006999
2025-03-27 02:00:51.821837 |
2025-03-27 02:00:51.822004 | TASK [upload-logs : Upload logs to log server]
2025-03-27 02:00:52.427573 | localhost | Output suppressed because no_log was given
2025-03-27 02:00:52.432158 |
2025-03-27 02:00:52.432312 | LOOP [upload-logs : Compress console log and json output]
2025-03-27 02:00:52.508662 | localhost | skipping: Conditional result was False
2025-03-27 02:00:52.526761 | localhost | skipping: Conditional result was False
2025-03-27 02:00:52.537052 |
2025-03-27 02:00:52.537243 | LOOP [upload-logs : Upload compressed console log and json output]
2025-03-27 02:00:52.603650 | localhost | skipping: Conditional result was False
2025-03-27 02:00:52.604004 |
2025-03-27 02:00:52.632439 | localhost | skipping: Conditional result was False
2025-03-27 02:00:52.641919 |
2025-03-27 02:00:52.642090 | LOOP [upload-logs : Upload console log and json output]
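The post-run plays above follow the usual zuul-jobs flow: stage-output gathers docs, artifacts, and logs into a staging area on the node and makes them readable, fetch-output pulls each category back into the executor's work directory with rsync, and upload-logs pushes the result to the log server. The sketch below is an assumed, simplified stand-in for those middle steps; the collect_output helper, the staging_root default, and the permission handling are invented for illustration and are not the zuul-jobs implementation.

import pathlib
import stat
import subprocess

SUBDIRS = ("logs", "artifacts", "docs")


def collect_output(node: str, work_dir: pathlib.Path,
                   staging_root: str = "~/zuul-output") -> None:
    # Loosely mirror fetch-output: pull each staged category from the node
    # into the executor's work directory, then make it world-readable so a
    # later upload step can read everything.
    for name in SUBDIRS:
        dest = work_dir / name
        dest.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["rsync", "-a", f"{node}:{staging_root}/{name}/", f"{dest}/"],
            check=True,
        )
        for path in dest.rglob("*"):
            mode = path.stat().st_mode
            path.chmod(mode | stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

Called with the build's work directory (the /var/lib/zuul/builds/9b1e4d12f4194a679b3d2d6e2f315612/work path seen above), this would recreate the logs, artifacts, and docs folders that the "Ensure local output dirs" loop reports before the rsync pulls run.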